Test Report: Hyper-V_Windows 19302

686e9da65a2d4195f8e8610efbc417c3b07d1722:2024-07-19:35410

Failed tests: 18 of 144

TestAddons/parallel/Registry (72.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 28.9318ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-sqlx6" [57f54d5d-5da9-44a4-8d26-866c53216fc1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0210383s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4z6j6" [0ed772e6-0b8d-489f-aa21-910d5b4fa5d9] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0119312s
addons_test.go:342: (dbg) Run:  kubectl --context addons-811100 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-811100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-811100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.2484932s)
addons_test.go:361: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 ip
addons_test.go:361: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 ip: (2.8160403s)
addons_test.go:366: expected stderr to be -empty- but got: *"W0719 03:35:25.679222    7460 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-811100 ip"
2024/07/19 03:35:28 [DEBUG] GET http://172.28.164.220:5000
addons_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable registry --alsologtostderr -v=1
addons_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable registry --alsologtostderr -v=1: (16.4635894s)
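The failing assertion at addons_test.go:366 expected empty stderr, but the `minikube ip` invocation emitted a Docker CLI warning: the `default` context's `meta.json` was missing under `C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1...\meta.json`. The directory name in that path is the SHA-256 digest of the context name, which the Docker CLI uses to lay out context metadata on disk. A minimal sketch of that derivation (an illustration using only Python's standard library, not part of the test suite):

```python
import hashlib

# The Docker CLI stores per-context metadata under
# ~/.docker/contexts/meta/<sha256(context name)>/meta.json.
# Hashing the context name "default" reproduces the directory
# name seen in the warning above.
name = "default"
digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
print(digest)  # should match the hash segment in the warning's path
```

Since the warning is emitted on stderr whenever the metadata file is absent, any test asserting empty stderr will trip on it even though the command itself succeeded.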
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-811100 -n addons-811100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-811100 -n addons-811100: (13.6228413s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 logs -n 25: (9.2643715s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-907700 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | -p download-only-907700                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| delete  | -p download-only-907700                                                                     | download-only-907700 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:27 UTC |
	| start   | -o=json --download-only                                                                     | download-only-217100 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | -p download-only-217100                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-217100                                                                     | download-only-217100 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| start   | -o=json --download-only                                                                     | download-only-641000 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | -p download-only-641000                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                                                         |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-641000                                                                     | download-only-641000 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-907700                                                                     | download-only-907700 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-217100                                                                     | download-only-217100 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-641000                                                                     | download-only-641000 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-056600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | binary-mirror-056600                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:58266                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-056600                                                                     | binary-mirror-056600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-811100 --wait=true                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | -p addons-811100                                                                            |                      |                   |         |                     |                     |
	| ip      | addons-811100 ip                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-811100 ssh cat                                                                       | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | /opt/local-path-provisioner/pvc-114f4030-a1d1-4247-ab71-0d8af834e357_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC |                     |
	|         | -p addons-811100                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:27:41
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:27:41.597223    3540 out.go:291] Setting OutFile to fd 888 ...
	I0719 03:27:41.598217    3540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:41.598217    3540 out.go:304] Setting ErrFile to fd 848...
	I0719 03:27:41.598217    3540 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:41.625439    3540 out.go:298] Setting JSON to false
	I0719 03:27:41.628608    3540 start.go:129] hostinfo: {"hostname":"minikube6","uptime":18687,"bootTime":1721340973,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:27:41.628608    3540 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:27:41.654552    3540 out.go:177] * [addons-811100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:27:41.659016    3540 notify.go:220] Checking for updates...
	I0719 03:27:41.659850    3540 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:27:41.663665    3540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:27:41.667538    3540 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:27:41.670329    3540 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:27:41.673094    3540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:27:41.677119    3540 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:27:47.105435    3540 out.go:177] * Using the hyperv driver based on user configuration
	I0719 03:27:47.109562    3540 start.go:297] selected driver: hyperv
	I0719 03:27:47.109562    3540 start.go:901] validating driver "hyperv" against <nil>
	I0719 03:27:47.109562    3540 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:27:47.157547    3540 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:27:47.158831    3540 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 03:27:47.158831    3540 cni.go:84] Creating CNI manager for ""
	I0719 03:27:47.158831    3540 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:27:47.158831    3540 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:27:47.159438    3540 start.go:340] cluster config:
	{Name:addons-811100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-811100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:27:47.159438    3540 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:27:47.164213    3540 out.go:177] * Starting "addons-811100" primary control-plane node in "addons-811100" cluster
	I0719 03:27:47.167293    3540 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:27:47.167293    3540 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:27:47.167293    3540 cache.go:56] Caching tarball of preloaded images
	I0719 03:27:47.168018    3540 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 03:27:47.168338    3540 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 03:27:47.168510    3540 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\config.json ...
	I0719 03:27:47.168510    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\config.json: {Name:mka0de3457115693dee9f0dccf0575e02425eafd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:27:47.170333    3540 start.go:360] acquireMachinesLock for addons-811100: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 03:27:47.170519    3540 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-811100"
	I0719 03:27:47.170706    3540 start.go:93] Provisioning new machine with config: &{Name:addons-811100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:addons-811100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 03:27:47.170706    3540 start.go:125] createHost starting for "" (driver="hyperv")
	I0719 03:27:47.177978    3540 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 03:27:47.178670    3540 start.go:159] libmachine.API.Create for "addons-811100" (driver="hyperv")
	I0719 03:27:47.178670    3540 client.go:168] LocalClient.Create starting
	I0719 03:27:47.179382    3540 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 03:27:47.413306    3540 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 03:27:47.540758    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 03:27:49.717038    3540 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 03:27:49.717038    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:27:49.717038    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 03:27:51.430966    3540 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 03:27:51.430966    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:27:51.431083    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 03:27:52.926589    3540 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 03:27:52.926589    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:27:52.926826    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 03:27:56.623659    3540 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 03:27:56.624501    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:27:56.627198    3540 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 03:27:57.070873    3540 main.go:141] libmachine: Creating SSH key...
	I0719 03:27:57.413948    3540 main.go:141] libmachine: Creating VM...
	I0719 03:27:57.413948    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 03:28:00.266697    3540 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 03:28:00.266743    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:00.266879    3540 main.go:141] libmachine: Using switch "Default Switch"
	I0719 03:28:00.267099    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 03:28:02.020262    3540 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 03:28:02.020379    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:02.020379    3540 main.go:141] libmachine: Creating VHD
	I0719 03:28:02.020379    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 03:28:05.838160    3540 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 100000FB-0458-42E7-A317-5F4C0A6EBAA0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 03:28:05.838225    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:05.838225    3540 main.go:141] libmachine: Writing magic tar header
	I0719 03:28:05.838225    3540 main.go:141] libmachine: Writing SSH key tar header
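The two "Writing ... tar header" steps above appear to embed a small tar stream (carrying the SSH key) at the start of the freshly created fixed VHD, so the guest can extract it on first boot. A minimal sketch of that trick against plain files — the filenames and key content are illustrative, not minikube's actual layout:

```shell
# Sketch: embed a tar archive at offset 0 of a zero-filled raw disk image,
# mimicking the "magic tar header" step logged above.
set -eu
workdir=$(mktemp -d)
cd "$workdir"

echo "ssh-rsa AAAA... example" > id_rsa.pub   # stand-in for the real key
tar cf keys.tar id_rsa.pub                    # build the tar stream

truncate -s 1M disk.img                       # zero-filled "raw disk"
dd if=keys.tar of=disk.img conv=notrunc status=none  # overlay tar at offset 0

# The image still parses as a tar archive: the zero fill after the last
# entry doubles as tar's two-zero-block end-of-archive marker.
tar tf disk.img
```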
	I0719 03:28:05.848300    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 03:28:09.119480    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:09.119480    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:09.120057    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\disk.vhd' -SizeBytes 20000MB
	I0719 03:28:11.675465    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:11.675748    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:11.675748    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-811100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0719 03:28:15.404581    3540 main.go:141] libmachine: [stdout =====>] : 
Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-811100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 03:28:15.404765    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:15.404765    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-811100 -DynamicMemoryEnabled $false
	I0719 03:28:17.657033    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:17.657595    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:17.657977    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-811100 -Count 2
	I0719 03:28:19.863743    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:19.864371    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:19.864371    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-811100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\boot2docker.iso'
	I0719 03:28:22.443210    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:22.443670    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:22.443670    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-811100 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\disk.vhd'
	I0719 03:28:25.113958    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:25.115084    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:25.115084    3540 main.go:141] libmachine: Starting VM...
	I0719 03:28:25.115187    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-811100
	I0719 03:28:28.295927    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:28.295927    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:28.295927    3540 main.go:141] libmachine: Waiting for host to start...
	I0719 03:28:28.296651    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:28:30.601669    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:28:30.601846    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:30.601925    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:28:33.108538    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:33.108538    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:34.116102    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:28:36.375069    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:28:36.376162    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:36.376273    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:28:38.910166    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:38.910166    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:39.921605    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:28:42.124994    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:28:42.124994    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:42.124994    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:28:44.687451    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:44.687451    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:45.690792    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:28:47.937003    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:28:47.937003    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:47.937460    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:28:50.488437    3540 main.go:141] libmachine: [stdout =====>] : 
	I0719 03:28:50.488895    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:51.496514    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:28:53.772035    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:28:53.772035    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:53.772035    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:28:56.353122    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:28:56.353122    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:56.353348    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:28:58.524215    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:28:58.524215    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:28:58.524215    3540 machine.go:94] provisionDockerMachine start ...
	I0719 03:28:58.524215    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:00.722787    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:00.722787    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:00.723541    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:03.259536    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:03.259536    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:03.264770    3540 main.go:141] libmachine: Using SSH client type: native
	I0719 03:29:03.278122    3540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.164.220 22 <nil> <nil>}
	I0719 03:29:03.278122    3540 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 03:29:03.405141    3540 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 03:29:03.405468    3540 buildroot.go:166] provisioning hostname "addons-811100"
	I0719 03:29:03.405468    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:05.573905    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:05.573905    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:05.574459    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:08.199629    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:08.199629    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:08.205977    3540 main.go:141] libmachine: Using SSH client type: native
	I0719 03:29:08.206142    3540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.164.220 22 <nil> <nil>}
	I0719 03:29:08.206142    3540 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-811100 && echo "addons-811100" | sudo tee /etc/hostname
	I0719 03:29:08.357598    3540 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-811100
	
	I0719 03:29:08.357598    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:10.499301    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:10.499301    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:10.499754    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:13.085701    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:13.085701    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:13.091816    3540 main.go:141] libmachine: Using SSH client type: native
	I0719 03:29:13.092534    3540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.164.220 22 <nil> <nil>}
	I0719 03:29:13.092534    3540 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-811100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-811100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-811100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:29:13.247825    3540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
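The hostname script the provisioner just ran either rewrites an existing `127.0.1.1` line or appends one. The same logic, runnable standalone against a temp file (paths and the hostname are taken from this log; `sudo`/`tee` dropped for the sketch):

```shell
set -eu
hosts=$(mktemp)                 # stand-in for /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
name=addons-811100

# Same branch structure as the SSH command above:
if ! grep -q "\s$name$" "$hosts"; then
    if grep -q '^127\.0\.1\.1\s' "$hosts"; then
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 $name/" "$hosts"
    else
        echo "127.0.1.1 $name" >> "$hosts"
    fi
fi
grep '^127\.0\.1\.1' "$hosts"   # -> 127.0.1.1 addons-811100
```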
	I0719 03:29:13.247955    3540 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 03:29:13.247984    3540 buildroot.go:174] setting up certificates
	I0719 03:29:13.247984    3540 provision.go:84] configureAuth start
	I0719 03:29:13.247984    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:15.406688    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:15.407730    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:15.407730    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:17.976507    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:17.976932    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:17.976995    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:20.125899    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:20.126871    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:20.126871    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:22.663643    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:22.663643    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:22.663783    3540 provision.go:143] copyHostCerts
	I0719 03:29:22.664817    3540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 03:29:22.667503    3540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 03:29:22.669704    3540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 03:29:22.671164    3540 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-811100 san=[127.0.0.1 172.28.164.220 addons-811100 localhost minikube]
	I0719 03:29:22.816303    3540 provision.go:177] copyRemoteCerts
	I0719 03:29:22.828154    3540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:29:22.828372    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:24.966136    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:24.966136    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:24.966136    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:27.564480    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:27.564480    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:27.565281    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:29:27.668859    3540 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8406465s)
	I0719 03:29:27.669478    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 03:29:27.717271    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 03:29:27.763721    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 03:29:27.811573    3540 provision.go:87] duration metric: took 14.5634126s to configureAuth
	I0719 03:29:27.811573    3540 buildroot.go:189] setting minikube options for container-runtime
	I0719 03:29:27.812721    3540 config.go:182] Loaded profile config "addons-811100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:29:27.812859    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:29.966727    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:29.966727    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:29.967652    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:32.513775    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:32.513994    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:32.519440    3540 main.go:141] libmachine: Using SSH client type: native
	I0719 03:29:32.519665    3540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.164.220 22 <nil> <nil>}
	I0719 03:29:32.519665    3540 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 03:29:32.650708    3540 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 03:29:32.650708    3540 buildroot.go:70] root file system type: tmpfs
	I0719 03:29:32.651240    3540 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 03:29:32.651240    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:34.838682    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:34.838682    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:34.838682    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:37.446044    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:37.446044    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:37.452481    3540 main.go:141] libmachine: Using SSH client type: native
	I0719 03:29:37.452481    3540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.164.220 22 <nil> <nil>}
	I0719 03:29:37.453088    3540 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 03:29:37.600559    3540 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 03:29:37.601090    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:39.772731    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:39.773591    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:39.773690    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:42.364854    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:42.364854    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:42.371269    3540 main.go:141] libmachine: Using SSH client type: native
	I0719 03:29:42.372026    3540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.164.220 22 <nil> <nil>}
	I0719 03:29:42.372026    3540 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 03:29:44.605317    3540 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
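The command above is an update-only-if-changed idiom: `diff` succeeds when the new unit matches the installed one (nothing to do), and fails — including on first boot, when `docker.service` does not exist yet, as the "can't stat" output shows — which triggers the move-and-restart branch. A sketch of the same idiom with the `systemctl` calls replaced by an `echo` so it runs anywhere; the directory and flag are illustrative:

```shell
set -eu
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd --new-flag\n' > "$dir/docker.service.new"

# docker.service does not exist yet, so diff fails and the
# replace branch runs, exactly as in the log above.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null \
  || { mv "$dir/docker.service.new" "$dir/docker.service"; echo "unit replaced"; }
```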
	
	I0719 03:29:44.605317    3540 machine.go:97] duration metric: took 46.0805443s to provisionDockerMachine
	I0719 03:29:44.605317    3540 client.go:171] duration metric: took 1m57.4252265s to LocalClient.Create
	I0719 03:29:44.605317    3540 start.go:167] duration metric: took 1m57.4252265s to libmachine.API.Create "addons-811100"
	I0719 03:29:44.606239    3540 start.go:293] postStartSetup for "addons-811100" (driver="hyperv")
	I0719 03:29:44.606239    3540 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:29:44.617773    3540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:29:44.617773    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:46.784345    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:46.785123    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:46.785123    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:49.352501    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:49.352501    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:49.353913    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:29:49.460760    3540 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8429282s)
	I0719 03:29:49.471826    3540 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:29:49.479148    3540 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 03:29:49.479148    3540 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 03:29:49.479416    3540 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 03:29:49.479416    3540 start.go:296] duration metric: took 4.8731185s for postStartSetup
	I0719 03:29:49.482368    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:51.603489    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:51.604747    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:51.604859    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:54.159795    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:54.160614    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:54.160861    3540 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\config.json ...
	I0719 03:29:54.164348    3540 start.go:128] duration metric: took 2m6.9920341s to createHost
	I0719 03:29:54.164440    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:29:56.273426    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:29:56.273622    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:56.273815    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:29:58.844880    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:29:58.845301    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:29:58.850904    3540 main.go:141] libmachine: Using SSH client type: native
	I0719 03:29:58.851501    3540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.164.220 22 <nil> <nil>}
	I0719 03:29:58.851501    3540 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 03:29:58.973080    3540 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721359798.982628331
	
	I0719 03:29:58.973080    3540 fix.go:216] guest clock: 1721359798.982628331
	I0719 03:29:58.973080    3540 fix.go:229] Guest: 2024-07-19 03:29:58.982628331 +0000 UTC Remote: 2024-07-19 03:29:54.1643487 +0000 UTC m=+132.721080901 (delta=4.818279631s)
	I0719 03:29:58.973080    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:30:01.126362    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:30:01.126362    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:30:01.126656    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:30:03.727954    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:30:03.727954    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:30:03.733568    3540 main.go:141] libmachine: Using SSH client type: native
	I0719 03:30:03.734376    3540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.164.220 22 <nil> <nil>}
	I0719 03:30:03.734376    3540 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721359798
	I0719 03:30:03.882292    3540 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 03:29:58 UTC 2024
	
	I0719 03:30:03.882356    3540 fix.go:236] clock set: Fri Jul 19 03:29:58 UTC 2024
	 (err=<nil>)
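The clock reconciliation above reads the guest clock with `date +%s.%N`, computes the delta against the host clock, and forces the guest to the host's epoch with `sudo date -s @<epoch>`. A simplified local stand-in (not minikube's actual fix.go code; here both clocks are read on the same machine, so the delta is near zero):

```shell
#!/bin/sh
# Simplified sketch of the guest-clock reconciliation seen in the log.
# Over SSH, the "guest" read is whatever `date +%s.%N` returns remotely.
guest=$(date +%s.%N)         # guest clock: seconds.nanoseconds
guest_s=${guest%%.*}         # integer seconds only
host_s=$(date +%s)           # host clock
delta=$((guest_s - host_s))
echo "delta=${delta}s"
# When the drift matters, the log shows the corrective command:
set_cmd="sudo date -s @${host_s}"
echo "$set_cmd"
```

On the same machine the delta is effectively zero; in the log above the guest was ~4.8s ahead, hence the `sudo date -s @1721359798` that follows.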
	I0719 03:30:03.882356    3540 start.go:83] releasing machines lock for "addons-811100", held for 2m16.7101456s
	I0719 03:30:03.882634    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:30:06.032472    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:30:06.033331    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:30:06.033331    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:30:08.705449    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:30:08.705449    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:30:08.710627    3540 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 03:30:08.710781    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:30:08.725295    3540 ssh_runner.go:195] Run: cat /version.json
	I0719 03:30:08.725295    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:30:10.965491    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:30:10.966385    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:30:10.966385    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:30:10.985815    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:30:10.985815    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:30:10.986364    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:30:13.649288    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:30:13.649288    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:30:13.650274    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:30:13.673403    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:30:13.673493    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:30:13.675203    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:30:13.740436    3540 ssh_runner.go:235] Completed: cat /version.json: (5.0150798s)
	I0719 03:30:13.751815    3540 ssh_runner.go:195] Run: systemctl --version
	I0719 03:30:13.756762    3540 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0460739s)
	W0719 03:30:13.756762    3540 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 03:30:13.772613    3540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 03:30:13.780735    3540 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 03:30:13.793951    3540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:30:13.822985    3540 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
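The `find ... -exec mv {} {}.mk_disabled` step above renames any bridge/podman CNI configs so the runtime ignores them. The same rename can be reproduced against a throwaway directory (file names here are made up for the sketch):

```shell
#!/bin/sh
# Reproduce the CNI-disable rename from the log on a temporary directory.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist" "$d/200-loopback.conf"
# Same predicate shape as the log's find: match *bridge*/*podman*,
# skip anything already renamed to *.mk_disabled, then rename.
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```

Only `87-podman-bridge.conflist` matches and is renamed, matching the single `disabled [...] bridge cni config(s)` line in the log.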
	I0719 03:30:13.822985    3540 start.go:495] detecting cgroup driver to use...
	I0719 03:30:13.822985    3540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:30:13.870394    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 03:30:13.899803    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0719 03:30:13.906325    3540 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 03:30:13.906325    3540 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
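The `curl.exe: command not found` failure above comes from running the Windows binary name inside the Linux guest, where only plain `curl` could exist. A hedged sketch of such a probe (the function name is invented for illustration; a `file://` URL stands in for the registry so the sketch needs no network):

```shell
#!/bin/sh
# Probe a URL the way the guest could, exiting 127 when curl is absent --
# mirroring the "command not found" exit status in the log.
check_registry() {
  if command -v curl >/dev/null 2>&1; then
    curl -sS -m 2 "$1" >/dev/null 2>&1 && echo reachable || echo unreachable
  else
    echo "curl: command not found" >&2
    return 127
  fi
}
probe=$(mktemp)
check_registry "file://$probe"
```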
	I0719 03:30:13.922644    3540 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 03:30:13.934454    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 03:30:13.965246    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:30:13.994918    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 03:30:14.024367    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:30:14.055857    3540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:30:14.085510    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 03:30:14.115109    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 03:30:14.144545    3540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 03:30:14.174304    3540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:30:14.205129    3540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:30:14.235321    3540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:30:14.439004    3540 ssh_runner.go:195] Run: sudo systemctl restart containerd
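The run of `sed` edits above rewrites /etc/containerd/config.toml in place (cgroupfs driver, runc v2 runtime, CNI conf_dir) before restarting containerd. The two key substitutions can be exercised against a scratch copy of the file:

```shell
#!/bin/sh
# Apply the log's key containerd substitutions to a scratch config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runtime.v1.linux"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# Force the cgroupfs driver (SystemdCgroup = false), as in the log.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
# Migrate the legacy v1 runtime name to runc v2, as in the log.
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
cat "$cfg"
```

The `\1` back-reference preserves the original indentation, which is why the log's pattern captures the leading spaces before substituting the value.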
	I0719 03:30:14.468536    3540 start.go:495] detecting cgroup driver to use...
	I0719 03:30:14.480133    3540 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 03:30:14.515057    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:30:14.549397    3540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 03:30:14.589336    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:30:14.622058    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:30:14.654240    3540 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 03:30:14.712831    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:30:14.736250    3540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:30:14.778171    3540 ssh_runner.go:195] Run: which cri-dockerd
	I0719 03:30:14.794683    3540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 03:30:14.813232    3540 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 03:30:14.857312    3540 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 03:30:15.050078    3540 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 03:30:15.242563    3540 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 03:30:15.242920    3540 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 03:30:15.285205    3540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:30:15.487288    3540 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 03:30:18.056119    3540 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5687421s)
	I0719 03:30:18.069023    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 03:30:18.106726    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 03:30:18.139819    3540 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 03:30:18.349114    3540 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 03:30:18.570354    3540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:30:18.772559    3540 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 03:30:18.816496    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 03:30:18.855662    3540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:30:19.068170    3540 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 03:30:19.179211    3540 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 03:30:19.194828    3540 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 03:30:19.203071    3540 start.go:563] Will wait 60s for crictl version
	I0719 03:30:19.216449    3540 ssh_runner.go:195] Run: which crictl
	I0719 03:30:19.235446    3540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 03:30:19.288675    3540 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 03:30:19.301107    3540 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 03:30:19.344824    3540 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 03:30:19.378503    3540 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 03:30:19.378770    3540 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 03:30:19.382953    3540 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 03:30:19.382953    3540 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 03:30:19.382953    3540 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 03:30:19.382953    3540 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 03:30:19.386082    3540 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 03:30:19.386082    3540 ip.go:210] interface addr: 172.28.160.1/20
	I0719 03:30:19.399517    3540 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 03:30:19.406079    3540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
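The one-liner above is an idempotent hosts-file update: drop any stale `host.minikube.internal` entry, append the fresh one, and copy the result back over /etc/hosts. The same pattern, reproduced against a temporary file (the tab-anchored bash pattern `$'\thost...'` is simplified to a plain match here):

```shell
#!/bin/sh
# Idempotent hosts-entry update, same shape as the log's /bin/bash one-liner,
# but against a scratch file instead of /etc/hosts.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
ip=172.28.160.1
tmp="$hosts.new"
# Remove any existing entry, then append the current one.
{ grep -v 'host\.minikube\.internal$' "$hosts"; printf '%s\thost.minikube.internal\n' "$ip"; } > "$tmp"
mv "$tmp" "$hosts"
cat "$hosts"
```

Running it repeatedly leaves exactly one `host.minikube.internal` line, which is why minikube can apply it unconditionally after the `grep` probe.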
	I0719 03:30:19.428733    3540 kubeadm.go:883] updating cluster {Name:addons-811100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:addons-811100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.164.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 03:30:19.428961    3540 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:30:19.438404    3540 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 03:30:19.461383    3540 docker.go:685] Got preloaded images: 
	I0719 03:30:19.461383    3540 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0719 03:30:19.475878    3540 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 03:30:19.505757    3540 ssh_runner.go:195] Run: which lz4
	I0719 03:30:19.526046    3540 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 03:30:19.532187    3540 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 03:30:19.532287    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0719 03:30:21.393773    3540 docker.go:649] duration metric: took 1.8818638s to copy over tarball
	I0719 03:30:21.407151    3540 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 03:30:26.594703    3540 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.1874892s)
	I0719 03:30:26.594703    3540 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 03:30:26.657352    3540 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 03:30:26.676097    3540 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0719 03:30:26.729490    3540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:30:26.938340    3540 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 03:30:33.684531    3540 ssh_runner.go:235] Completed: sudo systemctl restart docker: (6.7461086s)
	I0719 03:30:33.694671    3540 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 03:30:33.723426    3540 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 03:30:33.723499    3540 cache_images.go:84] Images are preloaded, skipping loading
	I0719 03:30:33.723639    3540 kubeadm.go:934] updating node { 172.28.164.220 8443 v1.30.3 docker true true} ...
	I0719 03:30:33.723775    3540 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-811100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.164.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-811100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 03:30:33.731866    3540 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 03:30:33.766788    3540 cni.go:84] Creating CNI manager for ""
	I0719 03:30:33.766788    3540 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:30:33.767391    3540 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 03:30:33.767391    3540 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.164.220 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-811100 NodeName:addons-811100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.164.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.164.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 03:30:33.767669    3540 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.164.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-811100"
	  kubeletExtraArgs:
	    node-ip: 172.28.164.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.164.220"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 03:30:33.779469    3540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 03:30:33.797121    3540 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 03:30:33.811871    3540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 03:30:33.828848    3540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0719 03:30:33.863025    3540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 03:30:33.893700    3540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0719 03:30:33.935962    3540 ssh_runner.go:195] Run: grep 172.28.164.220	control-plane.minikube.internal$ /etc/hosts
	I0719 03:30:33.941356    3540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.164.220	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 03:30:33.976157    3540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:30:34.180781    3540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 03:30:34.207802    3540 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100 for IP: 172.28.164.220
	I0719 03:30:34.207873    3540 certs.go:194] generating shared ca certs ...
	I0719 03:30:34.207952    3540 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:34.208437    3540 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 03:30:34.508843    3540 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt ...
	I0719 03:30:34.508843    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt: {Name:mkb0ebdce3b528a3c449211fdfbba2d86c130c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:34.509563    3540 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key ...
	I0719 03:30:34.510659    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key: {Name:mk1ec59eaa4c2f7a35370569c3fc13a80bc1499d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:34.511840    3540 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 03:30:34.698132    3540 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0719 03:30:34.698132    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mk78efc1a7bd38719c2f7a853f9109f9a1a3252e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:34.699747    3540 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key ...
	I0719 03:30:34.699747    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key: {Name:mk57de77abeaf23b535083770f5522a07b562b59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:34.701222    3540 certs.go:256] generating profile certs ...
	I0719 03:30:34.701222    3540 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.key
	I0719 03:30:34.701222    3540 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt with IP's: []
	I0719 03:30:35.162147    3540 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt ...
	I0719 03:30:35.162147    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: {Name:mk6853e51569ad7986c8337fef713a8c6ccc8237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:35.163680    3540 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.key ...
	I0719 03:30:35.163680    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.key: {Name:mk1b61983e2ba209a63d43f41a8ac9914d7a805b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:35.164762    3540 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.key.6ce202ba
	I0719 03:30:35.164762    3540 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.crt.6ce202ba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.164.220]
	I0719 03:30:35.475019    3540 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.crt.6ce202ba ...
	I0719 03:30:35.475019    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.crt.6ce202ba: {Name:mk8b7869296fdbc791ba52662a382f29c48144b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:35.476276    3540 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.key.6ce202ba ...
	I0719 03:30:35.476276    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.key.6ce202ba: {Name:mk4bd3ab5c8eeb9b0028e4330a4240c461f11512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:35.477266    3540 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.crt.6ce202ba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.crt
	I0719 03:30:35.489276    3540 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.key.6ce202ba -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.key
	I0719 03:30:35.490601    3540 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\proxy-client.key
	I0719 03:30:35.490601    3540 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\proxy-client.crt with IP's: []
	I0719 03:30:35.702849    3540 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\proxy-client.crt ...
	I0719 03:30:35.702849    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\proxy-client.crt: {Name:mkd9e085cc50c25d5a96950e0854a288e9252310 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:30:35.704562    3540 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\proxy-client.key ...
	I0719 03:30:35.704562    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\proxy-client.key: {Name:mk1f33e3450875e44544cd022d231b7c0bfea65f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
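The certs.go sequence above builds a local CA ("minikubeCA"), then signs an apiserver certificate whose SANs are the IPs listed in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 172.28.164.220) plus client and aggregator proxy certs. An openssl equivalent of the CA-plus-SAN-cert part, assuming openssl >= 1.1.1 is available; file names are illustrative, not minikube's:

```shell
#!/bin/sh
# openssl sketch of the CA + SAN-cert flow from certs.go (illustrative names).
dir=$(mktemp -d)
# 1. Self-signed CA, playing the role of "minikubeCA" in the log.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=minikubeCA' \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 2>/dev/null
# 2. apiserver key + CSR.
openssl req -newkey rsa:2048 -nodes -subj '/CN=minikube' \
  -keyout "$dir/apiserver.key" -out "$dir/apiserver.csr" 2>/dev/null
# 3. Sign with the CA, adding the IP SANs seen in the log.
printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:172.28.164.220\n' > "$dir/san.cnf"
openssl x509 -req -in "$dir/apiserver.csr" -CA "$dir/ca.crt" -CAkey "$dir/ca.key" \
  -CAcreateserial -extfile "$dir/san.cnf" -out "$dir/apiserver.crt" -days 1 2>/dev/null
openssl x509 -in "$dir/apiserver.crt" -noout -text | grep 'IP Address'
```

The `.6ce202ba` suffix in the log's file names is a hash of the SAN IP set, so the cert is regenerated when the cluster IPs change; the copy step that follows strips the suffix.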
	I0719 03:30:35.716386    3540 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 03:30:35.717435    3540 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 03:30:35.717435    3540 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 03:30:35.718088    3540 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 03:30:35.720264    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 03:30:35.770204    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 03:30:35.821347    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 03:30:35.864927    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 03:30:35.911361    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0719 03:30:35.961086    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 03:30:36.010319    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 03:30:36.061321    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 03:30:36.109719    3540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 03:30:36.160223    3540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 03:30:36.209622    3540 ssh_runner.go:195] Run: openssl version
	I0719 03:30:36.235240    3540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 03:30:36.268248    3540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:30:36.275885    3540 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:30:36.292396    3540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 03:30:36.311667    3540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 03:30:36.346663    3540 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 03:30:36.353478    3540 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 03:30:36.353777    3540 kubeadm.go:392] StartCluster: {Name:addons-811100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-811100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.164.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:30:36.364407    3540 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 03:30:36.401999    3540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 03:30:36.433369    3540 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 03:30:36.465089    3540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 03:30:36.483074    3540 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 03:30:36.483074    3540 kubeadm.go:157] found existing configuration files:
	
	I0719 03:30:36.494798    3540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 03:30:36.517747    3540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 03:30:36.531452    3540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 03:30:36.564435    3540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 03:30:36.579786    3540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 03:30:36.593231    3540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 03:30:36.622483    3540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 03:30:36.639054    3540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 03:30:36.652618    3540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 03:30:36.682882    3540 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 03:30:36.703252    3540 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 03:30:36.715448    3540 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 03:30:36.742914    3540 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 03:30:37.002360    3540 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 03:30:51.307730    3540 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 03:30:51.307880    3540 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 03:30:51.308029    3540 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 03:30:51.308029    3540 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 03:30:51.308738    3540 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 03:30:51.308770    3540 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 03:30:51.312867    3540 out.go:204]   - Generating certificates and keys ...
	I0719 03:30:51.313257    3540 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 03:30:51.313430    3540 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 03:30:51.313674    3540 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 03:30:51.313853    3540 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 03:30:51.313999    3540 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 03:30:51.314213    3540 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 03:30:51.314359    3540 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 03:30:51.314778    3540 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-811100 localhost] and IPs [172.28.164.220 127.0.0.1 ::1]
	I0719 03:30:51.315040    3540 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 03:30:51.315196    3540 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-811100 localhost] and IPs [172.28.164.220 127.0.0.1 ::1]
	I0719 03:30:51.315196    3540 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 03:30:51.315728    3540 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 03:30:51.315858    3540 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 03:30:51.315858    3540 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 03:30:51.315858    3540 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 03:30:51.315858    3540 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 03:30:51.316389    3540 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 03:30:51.316587    3540 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 03:30:51.316587    3540 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 03:30:51.316587    3540 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 03:30:51.316587    3540 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 03:30:51.320666    3540 out.go:204]   - Booting up control plane ...
	I0719 03:30:51.320929    3540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 03:30:51.321080    3540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 03:30:51.321369    3540 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 03:30:51.321665    3540 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 03:30:51.321825    3540 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 03:30:51.321984    3540 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 03:30:51.322277    3540 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 03:30:51.322459    3540 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 03:30:51.322570    3540 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001395978s
	I0719 03:30:51.322570    3540 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 03:30:51.322570    3540 kubeadm.go:310] [api-check] The API server is healthy after 6.503017647s
	I0719 03:30:51.322570    3540 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 03:30:51.322570    3540 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 03:30:51.322570    3540 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 03:30:51.323668    3540 kubeadm.go:310] [mark-control-plane] Marking the node addons-811100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 03:30:51.323668    3540 kubeadm.go:310] [bootstrap-token] Using token: uh6r2c.2fp704kw10xaspyn
	I0719 03:30:51.328681    3540 out.go:204]   - Configuring RBAC rules ...
	I0719 03:30:51.328681    3540 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 03:30:51.328681    3540 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 03:30:51.328681    3540 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 03:30:51.328681    3540 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 03:30:51.328681    3540 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 03:30:51.329898    3540 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 03:30:51.329898    3540 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 03:30:51.329898    3540 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 03:30:51.329898    3540 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 03:30:51.329898    3540 kubeadm.go:310] 
	I0719 03:30:51.329898    3540 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 03:30:51.329898    3540 kubeadm.go:310] 
	I0719 03:30:51.329898    3540 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 03:30:51.329898    3540 kubeadm.go:310] 
	I0719 03:30:51.329898    3540 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 03:30:51.330894    3540 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 03:30:51.330894    3540 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 03:30:51.330894    3540 kubeadm.go:310] 
	I0719 03:30:51.330894    3540 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 03:30:51.330894    3540 kubeadm.go:310] 
	I0719 03:30:51.330894    3540 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 03:30:51.330894    3540 kubeadm.go:310] 
	I0719 03:30:51.330894    3540 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 03:30:51.330894    3540 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 03:30:51.330894    3540 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 03:30:51.331882    3540 kubeadm.go:310] 
	I0719 03:30:51.331882    3540 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 03:30:51.331882    3540 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 03:30:51.331882    3540 kubeadm.go:310] 
	I0719 03:30:51.331882    3540 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uh6r2c.2fp704kw10xaspyn \
	I0719 03:30:51.331882    3540 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 \
	I0719 03:30:51.331882    3540 kubeadm.go:310] 	--control-plane 
	I0719 03:30:51.331882    3540 kubeadm.go:310] 
	I0719 03:30:51.331882    3540 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 03:30:51.331882    3540 kubeadm.go:310] 
	I0719 03:30:51.332920    3540 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uh6r2c.2fp704kw10xaspyn \
	I0719 03:30:51.332920    3540 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 
	I0719 03:30:51.332920    3540 cni.go:84] Creating CNI manager for ""
	I0719 03:30:51.332920    3540 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:30:51.335968    3540 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 03:30:51.348586    3540 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 03:30:51.368565    3540 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0719 03:30:51.400403    3540 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 03:30:51.414708    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-811100 minikube.k8s.io/updated_at=2024_07_19T03_30_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=addons-811100 minikube.k8s.io/primary=true
	I0719 03:30:51.415701    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:51.424857    3540 ops.go:34] apiserver oom_adj: -16
	I0719 03:30:51.617742    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:52.133165    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:52.634906    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:53.122527    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:53.626474    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:54.125492    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:54.633107    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:55.134651    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:55.621724    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:56.123665    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:56.627461    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:57.135496    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:57.631947    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:58.135461    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:58.634157    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:59.122858    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:30:59.622794    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:00.122069    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:00.623792    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:01.129784    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:01.620443    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:02.121945    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:02.622900    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:03.129751    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:03.636391    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:04.126863    3540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 03:31:04.238635    3540 kubeadm.go:1113] duration metric: took 12.8379782s to wait for elevateKubeSystemPrivileges
	I0719 03:31:04.239207    3540 kubeadm.go:394] duration metric: took 27.8850489s to StartCluster
	I0719 03:31:04.239301    3540 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:31:04.239398    3540 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:31:04.240204    3540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:31:04.242226    3540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 03:31:04.242341    3540 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.164.220 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 03:31:04.242522    3540 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0719 03:31:04.242522    3540 addons.go:69] Setting yakd=true in profile "addons-811100"
	I0719 03:31:04.242522    3540 addons.go:69] Setting metrics-server=true in profile "addons-811100"
	I0719 03:31:04.242522    3540 addons.go:69] Setting helm-tiller=true in profile "addons-811100"
	I0719 03:31:04.242522    3540 addons.go:234] Setting addon helm-tiller=true in "addons-811100"
	I0719 03:31:04.242522    3540 addons.go:234] Setting addon yakd=true in "addons-811100"
	I0719 03:31:04.242522    3540 addons.go:234] Setting addon metrics-server=true in "addons-811100"
	I0719 03:31:04.242522    3540 addons.go:69] Setting default-storageclass=true in profile "addons-811100"
	I0719 03:31:04.242522    3540 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-811100"
	I0719 03:31:04.242522    3540 addons.go:69] Setting storage-provisioner=true in profile "addons-811100"
	I0719 03:31:04.242522    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.243070    3540 addons.go:234] Setting addon storage-provisioner=true in "addons-811100"
	I0719 03:31:04.243070    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.243184    3540 addons.go:69] Setting gcp-auth=true in profile "addons-811100"
	I0719 03:31:04.243239    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.243239    3540 mustload.go:65] Loading cluster: addons-811100
	I0719 03:31:04.243343    3540 addons.go:69] Setting ingress-dns=true in profile "addons-811100"
	I0719 03:31:04.243343    3540 addons.go:234] Setting addon ingress-dns=true in "addons-811100"
	I0719 03:31:04.243551    3540 addons.go:69] Setting inspektor-gadget=true in profile "addons-811100"
	I0719 03:31:04.243551    3540 addons.go:69] Setting ingress=true in profile "addons-811100"
	I0719 03:31:04.243551    3540 addons.go:234] Setting addon inspektor-gadget=true in "addons-811100"
	I0719 03:31:04.243551    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.242522    3540 config.go:182] Loaded profile config "addons-811100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:31:04.243773    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.242522    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.244079    3540 config.go:182] Loaded profile config "addons-811100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:31:04.244079    3540 addons.go:69] Setting volcano=true in profile "addons-811100"
	I0719 03:31:04.244239    3540 addons.go:234] Setting addon volcano=true in "addons-811100"
	I0719 03:31:04.244388    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.244443    3540 addons.go:69] Setting volumesnapshots=true in profile "addons-811100"
	I0719 03:31:04.244500    3540 addons.go:234] Setting addon volumesnapshots=true in "addons-811100"
	I0719 03:31:04.244500    3540 addons.go:69] Setting registry=true in profile "addons-811100"
	I0719 03:31:04.244500    3540 addons.go:234] Setting addon registry=true in "addons-811100"
	I0719 03:31:04.244636    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.243239    3540 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-811100"
	I0719 03:31:04.244775    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.244775    3540 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-811100"
	I0719 03:31:04.244866    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.242522    3540 addons.go:69] Setting cloud-spanner=true in profile "addons-811100"
	I0719 03:31:04.244992    3540 addons.go:234] Setting addon cloud-spanner=true in "addons-811100"
	I0719 03:31:04.245081    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.243551    3540 addons.go:234] Setting addon ingress=true in "addons-811100"
	I0719 03:31:04.245272    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.245382    3540 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-811100"
	I0719 03:31:04.242522    3540 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-811100"
	I0719 03:31:04.245509    3540 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-811100"
	I0719 03:31:04.245676    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:04.245447    3540 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-811100"
	I0719 03:31:04.246770    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.247497    3540 out.go:177] * Verifying Kubernetes components...
	I0719 03:31:04.250121    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.252365    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.252365    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.252365    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.252365    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.252365    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.253595    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.253880    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.254641    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.255447    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.255612    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.255999    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.256898    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.257585    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.260580    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:04.279616    3540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:31:05.573577    3540 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.3312205s)
	I0719 03:31:05.573577    3540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 03:31:05.573577    3540 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.2939454s)
	I0719 03:31:05.595723    3540 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 03:31:08.040583    3540 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.4669759s)
	I0719 03:31:08.040583    3540 start.go:971] {"host.minikube.internal": 172.28.160.1} host record injected into CoreDNS's ConfigMap
	I0719 03:31:08.047583    3540 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.4518307s)
	I0719 03:31:08.050582    3540 node_ready.go:35] waiting up to 6m0s for node "addons-811100" to be "Ready" ...
	I0719 03:31:08.430588    3540 node_ready.go:49] node "addons-811100" has status "Ready":"True"
	I0719 03:31:08.430588    3540 node_ready.go:38] duration metric: took 380.0012ms for node "addons-811100" to be "Ready" ...
	I0719 03:31:08.430588    3540 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	W0719 03:31:09.127896    3540 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "addons-811100" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0719 03:31:09.127896    3540 start.go:160] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0719 03:31:09.776617    3540 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:11.391775    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.391861    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.400730    3540 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0719 03:31:11.411883    3540 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:31:11.418836    3540 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:31:11.424850    3540 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 03:31:11.424850    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0719 03:31:11.424850    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.509034    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.510032    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.511032    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.511032    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.515102    3540 addons.go:234] Setting addon default-storageclass=true in "addons-811100"
	I0719 03:31:11.515102    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:11.516054    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.517024    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.517024    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.518142    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.520309    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.520309    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:11.520020    3540 out.go:177]   - Using image docker.io/registry:2.8.3
	I0719 03:31:11.524449    3540 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0719 03:31:11.531111    3540 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0719 03:31:11.531111    3540 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0719 03:31:11.531111    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.532678    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.532678    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.535689    3540 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0719 03:31:11.538866    3540 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0719 03:31:11.538866    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0719 03:31:11.539046    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.542563    3540 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0719 03:31:11.546367    3540 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0719 03:31:11.547381    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0719 03:31:11.547381    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.625008    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.625008    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.629823    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.629899    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.636724    3540 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 03:31:11.644901    3540 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0719 03:31:11.652976    3540 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:31:11.652976    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 03:31:11.652976    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.659748    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.659748    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.662620    3540 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0719 03:31:11.662620    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0719 03:31:11.662620    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.662620    3540 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-811100"
	I0719 03:31:11.663173    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:11.665041    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.923140    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.923140    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.931540    3540 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0719 03:31:11.941183    3540 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 03:31:11.941183    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0719 03:31:11.941183    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:11.945253    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.945253    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:11.978141    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 03:31:11.985011    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:11.985173    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:12.017394    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:12.019386    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:12.024400    3540 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0719 03:31:12.024613    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:12.032898    3540 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 03:31:12.032898    3540 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 03:31:12.032898    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:12.032898    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:12.032898    3540 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0719 03:31:12.058516    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:12.037611    3540 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 03:31:12.059506    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0719 03:31:12.059506    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:12.074789    3540 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 03:31:12.074789    3540 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 03:31:12.074789    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:12.086255    3540 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0719 03:31:12.094239    3540 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0719 03:31:12.105944    3540 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0719 03:31:12.113085    3540 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0719 03:31:12.113085    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0719 03:31:12.113085    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:12.187733    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:12.187733    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:12.191473    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0719 03:31:12.195702    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:12.195702    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:12.195702    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0719 03:31:12.207331    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0719 03:31:12.210498    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0719 03:31:12.240287    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0719 03:31:12.248415    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0719 03:31:12.251400    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0719 03:31:12.281492    3540 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0719 03:31:12.295401    3540 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0719 03:31:12.295401    3540 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0719 03:31:12.295401    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:12.300424    3540 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0719 03:31:12.323407    3540 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0719 03:31:12.323407    3540 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0719 03:31:12.323407    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:14.910321    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:17.085269    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:17.873384    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:17.873384    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:17.873384    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:17.904380    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:17.904380    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:17.904380    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:18.419924    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:18.420762    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:18.426285    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:18.426347    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:18.426404    3540 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 03:31:18.426404    3540 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 03:31:18.426404    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:18.429435    3540 out.go:177]   - Using image docker.io/busybox:stable
	I0719 03:31:18.438586    3540 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0719 03:31:18.445067    3540 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 03:31:18.445067    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0719 03:31:18.445067    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:18.450070    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:18.452097    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:18.452097    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:18.498447    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:18.499192    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:18.499297    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:18.589258    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:18.589258    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:18.589258    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:18.722599    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:18.722599    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:18.722599    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:18.999075    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:18.999075    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:18.999075    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:19.214391    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:19.214391    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:19.215369    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:19.228630    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:19.228630    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:19.228630    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:19.276241    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:19.397948    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:19.397948    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:19.398949    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:19.751931    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:19.751931    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:19.751931    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:19.772936    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:19.772936    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:19.772936    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:20.361580    3540 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 03:31:20.361580    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:21.351383    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:23.356030    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:23.763018    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:23.763018    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:23.763018    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:25.100141    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:25.100141    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:25.100141    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:25.320592    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:25.320592    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:25.321597    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:25.477096    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:25.477096    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:25.478068    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:25.537085    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:25.537832    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:25.539792    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:25.684008    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0719 03:31:25.802121    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:25.913127    3540 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0719 03:31:25.913127    3540 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0719 03:31:26.001619    3540 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0719 03:31:26.001619    3540 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0719 03:31:26.078634    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:26.078634    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:26.079763    3540 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0719 03:31:26.079763    3540 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0719 03:31:26.079763    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:26.306504    3540 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0719 03:31:26.306504    3540 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0719 03:31:26.435112    3540 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0719 03:31:26.435112    3540 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0719 03:31:26.525141    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:26.525141    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:26.526229    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:26.575619    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0719 03:31:26.655097    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:26.655097    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:26.655735    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:26.693132    3540 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0719 03:31:26.693132    3540 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0719 03:31:26.768340    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:26.768340    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:26.768976    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:26.844652    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:26.844652    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:26.845699    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:26.905717    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:26.905785    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:26.906542    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:26.941899    3540 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 03:31:26.941899    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0719 03:31:26.988919    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:26.988919    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:26.989687    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:27.055280    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:27.055280    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:27.055942    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:27.070082    3540 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 03:31:27.070082    3540 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 03:31:27.149388    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:27.149388    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:27.150086    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:27.175679    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:27.175679    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:27.175679    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:27.265251    3540 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 03:31:27.265251    3540 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 03:31:27.324873    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0719 03:31:27.450528    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 03:31:27.487233    3540 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0719 03:31:27.487233    3540 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0719 03:31:27.490491    3540 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 03:31:27.490491    3540 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 03:31:27.502495    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0719 03:31:27.547788    3540 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0719 03:31:27.547903    3540 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0719 03:31:27.677935    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0719 03:31:27.755910    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:27.755910    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:27.756919    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:27.769428    3540 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0719 03:31:27.769550    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0719 03:31:27.780605    3540 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 03:31:27.780605    3540 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 03:31:27.863854    3540 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0719 03:31:27.863988    3540 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0719 03:31:27.868584    3540 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 03:31:27.868584    3540 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0719 03:31:27.880162    3540 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 03:31:27.880283    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 03:31:28.038325    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0719 03:31:28.053652    3540 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:31:28.053730    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 03:31:28.131060    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0719 03:31:28.218429    3540 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0719 03:31:28.218429    3540 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0719 03:31:28.270283    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:31:28.299558    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:28.313660    3540 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 03:31:28.313660    3540 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 03:31:28.533708    3540 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0719 03:31:28.533764    3540 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0719 03:31:28.544713    3540 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0719 03:31:28.544713    3540 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0719 03:31:28.632272    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:28.632272    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:28.632887    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:28.833515    3540 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 03:31:28.833515    3540 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 03:31:28.899400    3540 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0719 03:31:28.899400    3540 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0719 03:31:29.131974    3540 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0719 03:31:29.131974    3540 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0719 03:31:29.156731    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:29.156731    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:29.157277    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:29.203805    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:29.203805    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:29.203805    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:29.286358    3540 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0719 03:31:29.286358    3540 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0719 03:31:29.306677    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 03:31:29.537588    3540 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0719 03:31:29.537588    3540 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0719 03:31:29.613308    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0719 03:31:29.918652    3540 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0719 03:31:29.918652    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0719 03:31:30.140611    3540 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0719 03:31:30.140611    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0719 03:31:30.202804    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 03:31:30.246202    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:30.246202    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:30.246802    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:30.247571    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0719 03:31:30.326745    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:30.477099    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0719 03:31:30.617173    3540 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0719 03:31:30.617173    3540 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0719 03:31:31.497332    3540 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0719 03:31:31.497408    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0719 03:31:32.000529    3540 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 03:31:32.473966    3540 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0719 03:31:32.474026    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0719 03:31:32.798661    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:33.054506    3540 addons.go:234] Setting addon gcp-auth=true in "addons-811100"
	I0719 03:31:33.054506    3540 host.go:66] Checking if "addons-811100" exists ...
	I0719 03:31:33.056494    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:33.362635    3540 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 03:31:33.362635    3540 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0719 03:31:34.367148    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0719 03:31:34.851670    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:35.604681    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:35.604681    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:35.617189    3540 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 03:31:35.617189    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-811100 ).state
	I0719 03:31:36.909483    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:37.948708    3540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:31:37.948708    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:37.948708    3540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-811100 ).networkadapters[0]).ipaddresses[0]
	I0719 03:31:39.363064    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:39.496184    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (12.9204092s)
	I0719 03:31:39.496184    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (13.8120096s)
	I0719 03:31:39.496524    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (12.1714634s)
	I0719 03:31:39.496524    3540 addons.go:475] Verifying addon ingress=true in "addons-811100"
	I0719 03:31:39.496647    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.0459737s)
	I0719 03:31:39.496746    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.9941067s)
	I0719 03:31:39.496826    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.4583624s)
	I0719 03:31:39.496951    3540 addons.go:475] Verifying addon registry=true in "addons-811100"
	I0719 03:31:39.497022    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (11.3658251s)
	I0719 03:31:39.496826    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.8186694s)
	I0719 03:31:39.497022    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (11.2266038s)
	I0719 03:31:39.497022    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.1901235s)
	W0719 03:31:39.502450    3540 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 03:31:39.502450    3540 out.go:177] * Verifying ingress addon...
	I0719 03:31:39.502450    3540 addons.go:475] Verifying addon metrics-server=true in "addons-811100"
	I0719 03:31:39.502450    3540 retry.go:31] will retry after 321.420908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 03:31:39.504903    3540 out.go:177] * Verifying registry addon...
	I0719 03:31:39.511553    3540 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 03:31:39.513548    3540 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 03:31:39.578553    3540 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 03:31:39.578553    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:39.582048    3540 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 03:31:39.582106    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:39.841954    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 03:31:40.029112    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:40.031004    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:40.525202    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:40.525202    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:40.706738    3540 main.go:141] libmachine: [stdout =====>] : 172.28.164.220
	
	I0719 03:31:40.707730    3540 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:31:40.707914    3540 sshutil.go:53] new ssh client: &{IP:172.28.164.220 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\addons-811100\id_rsa Username:docker}
	I0719 03:31:41.029075    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:41.029823    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:41.549844    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:41.550720    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:41.807860    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:42.094969    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:42.095704    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:42.564929    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:42.565544    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:43.103796    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:43.118976    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:43.525743    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:43.526362    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:43.878855    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:44.027089    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:44.028789    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:44.593739    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:44.594406    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:44.725497    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (15.1120072s)
	I0719 03:31:44.725497    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (14.5225186s)
	I0719 03:31:44.725497    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (14.4777524s)
	I0719 03:31:44.725497    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (14.2482273s)
	I0719 03:31:44.728484    3540 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-811100 service yakd-dashboard -n yakd-dashboard
	
	W0719 03:31:44.832049    3540 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0719 03:31:45.054044    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:45.060310    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:45.603012    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:45.603537    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:45.798274    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (11.4309888s)
	I0719 03:31:45.798274    3540 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-811100"
	I0719 03:31:45.798274    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.9552662s)
	I0719 03:31:45.799016    3540 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (10.1816611s)
	I0719 03:31:45.802351    3540 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 03:31:45.804843    3540 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0719 03:31:45.807732    3540 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0719 03:31:45.809831    3540 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 03:31:45.809831    3540 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 03:31:45.810799    3540 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 03:31:45.834851    3540 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 03:31:45.834851    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:45.907150    3540 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 03:31:45.907150    3540 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 03:31:45.981996    3540 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 03:31:45.981996    3540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0719 03:31:46.075063    3540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 03:31:46.271164    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:46.276694    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:46.291887    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:46.318521    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:46.527623    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:46.529605    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:46.834199    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:47.042513    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:47.053245    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:47.126730    3540 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.0516549s)
	I0719 03:31:47.141072    3540 addons.go:475] Verifying addon gcp-auth=true in "addons-811100"
	I0719 03:31:47.145622    3540 out.go:177] * Verifying gcp-auth addon...
	I0719 03:31:47.150416    3540 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 03:31:47.156322    3540 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 03:31:47.349343    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:47.537056    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:47.537056    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:47.818690    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:48.028145    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:48.029463    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:48.294168    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:48.325802    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:48.527895    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:48.529540    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:48.824381    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:49.029740    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:49.031830    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:49.324679    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:49.532234    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:49.533899    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:49.833937    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:50.029113    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:50.042007    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:50.296368    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:50.323860    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:50.528132    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:50.528326    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:50.829826    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:51.031747    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:51.033952    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:51.322003    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:51.523537    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:51.524643    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:51.826155    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:52.032798    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:52.032798    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:52.334444    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:52.525531    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:52.526713    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:52.790812    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:52.824122    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:53.026480    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:53.026480    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:53.967159    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:53.967391    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:53.967391    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:54.164566    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:54.166136    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:54.169187    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:54.799022    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:54.800194    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:54.803788    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:54.810194    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:54.826813    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:55.027518    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:55.031754    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:55.330291    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:55.520701    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:55.521022    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:55.823473    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:56.031333    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:56.038363    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:56.318569    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:56.523839    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:56.526664    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:56.830362    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:57.032704    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:57.032704    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:57.290030    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:57.321541    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:57.527221    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:57.529418    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:57.830947    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:58.035528    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:58.038671    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:58.318501    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:58.525360    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:58.527977    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:58.828973    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:59.161344    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:59.161858    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:59.290424    3540 pod_ready.go:102] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"False"
	I0719 03:31:59.321760    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:59.527501    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:31:59.528494    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:31:59.805089    3540 pod_ready.go:92] pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace has status "Ready":"True"
	I0719 03:31:59.805089    3540 pod_ready.go:81] duration metric: took 50.0278706s for pod "coredns-7db6d8ff4d-ljnfs" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.805287    3540 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-vvhpd" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.854257    3540 pod_ready.go:92] pod "coredns-7db6d8ff4d-vvhpd" in "kube-system" namespace has status "Ready":"True"
	I0719 03:31:59.854455    3540 pod_ready.go:81] duration metric: took 49.1669ms for pod "coredns-7db6d8ff4d-vvhpd" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.854455    3540 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-811100" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.858799    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:31:59.863407    3540 pod_ready.go:92] pod "etcd-addons-811100" in "kube-system" namespace has status "Ready":"True"
	I0719 03:31:59.863407    3540 pod_ready.go:81] duration metric: took 8.9519ms for pod "etcd-addons-811100" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.863407    3540 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-811100" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.869401    3540 pod_ready.go:92] pod "kube-apiserver-addons-811100" in "kube-system" namespace has status "Ready":"True"
	I0719 03:31:59.869401    3540 pod_ready.go:81] duration metric: took 5.9941ms for pod "kube-apiserver-addons-811100" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.869401    3540 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-811100" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.874396    3540 pod_ready.go:92] pod "kube-controller-manager-addons-811100" in "kube-system" namespace has status "Ready":"True"
	I0719 03:31:59.874396    3540 pod_ready.go:81] duration metric: took 4.9946ms for pod "kube-controller-manager-addons-811100" in "kube-system" namespace to be "Ready" ...
	I0719 03:31:59.874396    3540 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-k7rwl" in "kube-system" namespace to be "Ready" ...
	I0719 03:32:00.037073    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:00.037651    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:00.204568    3540 pod_ready.go:92] pod "kube-proxy-k7rwl" in "kube-system" namespace has status "Ready":"True"
	I0719 03:32:00.204637    3540 pod_ready.go:81] duration metric: took 330.2376ms for pod "kube-proxy-k7rwl" in "kube-system" namespace to be "Ready" ...
	I0719 03:32:00.204637    3540 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-811100" in "kube-system" namespace to be "Ready" ...
	I0719 03:32:00.320277    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:00.524028    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:00.526315    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:00.599789    3540 pod_ready.go:92] pod "kube-scheduler-addons-811100" in "kube-system" namespace has status "Ready":"True"
	I0719 03:32:00.599789    3540 pod_ready.go:81] duration metric: took 395.1471ms for pod "kube-scheduler-addons-811100" in "kube-system" namespace to be "Ready" ...
	I0719 03:32:00.599789    3540 pod_ready.go:38] duration metric: took 52.1685736s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 03:32:00.599789    3540 api_server.go:52] waiting for apiserver process to appear ...
	I0719 03:32:00.610825    3540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 03:32:00.646891    3540 api_server.go:72] duration metric: took 56.4036909s to wait for apiserver process to appear ...
	I0719 03:32:00.647005    3540 api_server.go:88] waiting for apiserver healthz status ...
	I0719 03:32:00.647005    3540 api_server.go:253] Checking apiserver healthz at https://172.28.164.220:8443/healthz ...
	I0719 03:32:00.653128    3540 api_server.go:279] https://172.28.164.220:8443/healthz returned 200:
	ok
	I0719 03:32:00.655308    3540 api_server.go:141] control plane version: v1.30.3
	I0719 03:32:00.655369    3540 api_server.go:131] duration metric: took 8.364ms to wait for apiserver health ...
	I0719 03:32:00.655369    3540 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 03:32:00.813747    3540 system_pods.go:59] 19 kube-system pods found
	I0719 03:32:00.813747    3540 system_pods.go:61] "coredns-7db6d8ff4d-ljnfs" [ea5924c7-c3ce-42d5-a4db-404cde85e6c1] Running
	I0719 03:32:00.813747    3540 system_pods.go:61] "coredns-7db6d8ff4d-vvhpd" [f653b34d-4d6a-49ba-a5f3-8b9aaff706ec] Running
	I0719 03:32:00.813747    3540 system_pods.go:61] "csi-hostpath-attacher-0" [c29ab6ec-c5b0-416a-a2e0-fee0f884feeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 03:32:00.813747    3540 system_pods.go:61] "csi-hostpath-resizer-0" [a039eba2-124d-4175-94d3-a34f54a89c26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 03:32:00.813747    3540 system_pods.go:61] "csi-hostpathplugin-pbjf8" [60f64066-4005-40c3-8696-26b7dd428e80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 03:32:00.813747    3540 system_pods.go:61] "etcd-addons-811100" [29e07afa-ea9a-4d7e-81bf-6e5ccf514c9f] Running
	I0719 03:32:00.813747    3540 system_pods.go:61] "kube-apiserver-addons-811100" [c5687c13-8fdb-471f-bf51-8fbec68232c2] Running
	I0719 03:32:00.813747    3540 system_pods.go:61] "kube-controller-manager-addons-811100" [849e663e-eeaf-4917-90d0-06972a5d2e8b] Running
	I0719 03:32:00.813747    3540 system_pods.go:61] "kube-ingress-dns-minikube" [ec9b32d5-0eb4-4bd8-85cc-215f0a01bd05] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0719 03:32:00.814749    3540 system_pods.go:61] "kube-proxy-k7rwl" [daa59f7e-c4a3-4a0e-bd57-516bd911ed34] Running
	I0719 03:32:00.814749    3540 system_pods.go:61] "kube-scheduler-addons-811100" [a7e2ae45-625c-438b-8edb-59be72ea71cd] Running
	I0719 03:32:00.814749    3540 system_pods.go:61] "metrics-server-c59844bb4-k5xqw" [2ffe89c5-d971-4af7-8ab4-31b32d653271] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 03:32:00.814749    3540 system_pods.go:61] "nvidia-device-plugin-daemonset-s468j" [3a6ddfdd-7b14-41e2-9d1d-4f7227d5cbe1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0719 03:32:00.814749    3540 system_pods.go:61] "registry-656c9c8d9c-sqlx6" [57f54d5d-5da9-44a4-8d26-866c53216fc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0719 03:32:00.814749    3540 system_pods.go:61] "registry-proxy-4z6j6" [0ed772e6-0b8d-489f-aa21-910d5b4fa5d9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 03:32:00.814749    3540 system_pods.go:61] "snapshot-controller-745499f584-lprkt" [c8db6906-c386-4e70-a82e-0368b9e1a5d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 03:32:00.814749    3540 system_pods.go:61] "snapshot-controller-745499f584-x6dxz" [40906daf-a45c-4fca-85ca-0351cbfa5d68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 03:32:00.814749    3540 system_pods.go:61] "storage-provisioner" [92c80d9f-f513-472a-a808-f55e69140aa4] Running
	I0719 03:32:00.814749    3540 system_pods.go:61] "tiller-deploy-6677d64bcd-9m5qz" [eeffa01d-176d-4218-b5d5-db629ef2b701] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0719 03:32:00.814749    3540 system_pods.go:74] duration metric: took 159.378ms to wait for pod list to return data ...
	I0719 03:32:00.814749    3540 default_sa.go:34] waiting for default service account to be created ...
	I0719 03:32:00.823732    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:00.996476    3540 default_sa.go:45] found service account: "default"
	I0719 03:32:00.996476    3540 default_sa.go:55] duration metric: took 181.725ms for default service account to be created ...
	I0719 03:32:00.996476    3540 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 03:32:01.033248    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:01.034586    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:01.211506    3540 system_pods.go:86] 19 kube-system pods found
	I0719 03:32:01.211506    3540 system_pods.go:89] "coredns-7db6d8ff4d-ljnfs" [ea5924c7-c3ce-42d5-a4db-404cde85e6c1] Running
	I0719 03:32:01.211506    3540 system_pods.go:89] "coredns-7db6d8ff4d-vvhpd" [f653b34d-4d6a-49ba-a5f3-8b9aaff706ec] Running
	I0719 03:32:01.211506    3540 system_pods.go:89] "csi-hostpath-attacher-0" [c29ab6ec-c5b0-416a-a2e0-fee0f884feeb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0719 03:32:01.211506    3540 system_pods.go:89] "csi-hostpath-resizer-0" [a039eba2-124d-4175-94d3-a34f54a89c26] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0719 03:32:01.211506    3540 system_pods.go:89] "csi-hostpathplugin-pbjf8" [60f64066-4005-40c3-8696-26b7dd428e80] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0719 03:32:01.211506    3540 system_pods.go:89] "etcd-addons-811100" [29e07afa-ea9a-4d7e-81bf-6e5ccf514c9f] Running
	I0719 03:32:01.211506    3540 system_pods.go:89] "kube-apiserver-addons-811100" [c5687c13-8fdb-471f-bf51-8fbec68232c2] Running
	I0719 03:32:01.211506    3540 system_pods.go:89] "kube-controller-manager-addons-811100" [849e663e-eeaf-4917-90d0-06972a5d2e8b] Running
	I0719 03:32:01.211506    3540 system_pods.go:89] "kube-ingress-dns-minikube" [ec9b32d5-0eb4-4bd8-85cc-215f0a01bd05] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0719 03:32:01.211506    3540 system_pods.go:89] "kube-proxy-k7rwl" [daa59f7e-c4a3-4a0e-bd57-516bd911ed34] Running
	I0719 03:32:01.211506    3540 system_pods.go:89] "kube-scheduler-addons-811100" [a7e2ae45-625c-438b-8edb-59be72ea71cd] Running
	I0719 03:32:01.211506    3540 system_pods.go:89] "metrics-server-c59844bb4-k5xqw" [2ffe89c5-d971-4af7-8ab4-31b32d653271] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0719 03:32:01.211506    3540 system_pods.go:89] "nvidia-device-plugin-daemonset-s468j" [3a6ddfdd-7b14-41e2-9d1d-4f7227d5cbe1] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0719 03:32:01.211506    3540 system_pods.go:89] "registry-656c9c8d9c-sqlx6" [57f54d5d-5da9-44a4-8d26-866c53216fc1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0719 03:32:01.211506    3540 system_pods.go:89] "registry-proxy-4z6j6" [0ed772e6-0b8d-489f-aa21-910d5b4fa5d9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0719 03:32:01.211506    3540 system_pods.go:89] "snapshot-controller-745499f584-lprkt" [c8db6906-c386-4e70-a82e-0368b9e1a5d6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 03:32:01.211506    3540 system_pods.go:89] "snapshot-controller-745499f584-x6dxz" [40906daf-a45c-4fca-85ca-0351cbfa5d68] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 03:32:01.211506    3540 system_pods.go:89] "storage-provisioner" [92c80d9f-f513-472a-a808-f55e69140aa4] Running
	I0719 03:32:01.211506    3540 system_pods.go:89] "tiller-deploy-6677d64bcd-9m5qz" [eeffa01d-176d-4218-b5d5-db629ef2b701] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0719 03:32:01.212541    3540 system_pods.go:126] duration metric: took 216.0627ms to wait for k8s-apps to be running ...
	I0719 03:32:01.212541    3540 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 03:32:01.223517    3540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 03:32:01.248517    3540 system_svc.go:56] duration metric: took 35.975ms WaitForService to wait for kubelet
	I0719 03:32:01.248806    3540 kubeadm.go:582] duration metric: took 57.0055981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 03:32:01.248806    3540 node_conditions.go:102] verifying NodePressure condition ...
	I0719 03:32:01.332900    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:01.407042    3540 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 03:32:01.407042    3540 node_conditions.go:123] node cpu capacity is 2
	I0719 03:32:01.407042    3540 node_conditions.go:105] duration metric: took 158.2341ms to run NodePressure ...
	I0719 03:32:01.407042    3540 start.go:241] waiting for startup goroutines ...
	I0719 03:32:01.525639    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:01.525964    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:01.824693    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:02.026747    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:02.028048    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:02.559423    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:02.564892    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:02.565971    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:02.896762    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:03.024721    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:03.026854    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:03.328063    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:03.533171    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:03.533926    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:03.832664    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:04.022796    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:04.032913    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:04.864372    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:04.865109    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:04.870068    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:04.885033    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:05.029730    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:05.032181    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:05.725953    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:05.726304    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:05.728641    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:05.826988    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:06.027337    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:06.028965    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:06.328026    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:06.530145    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:06.532457    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:06.832051    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:07.023078    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:07.025825    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:07.333045    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:07.528898    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:07.530900    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:07.829725    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:08.022208    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:08.024122    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:08.331292    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:08.530955    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:08.531155    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:08.834624    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:09.023889    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:09.024113    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:09.325851    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:09.530860    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:09.537441    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:09.829912    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:10.032644    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:10.034841    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:10.333649    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:10.525590    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:10.529005    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:10.827879    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:11.031016    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:11.039027    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:11.356709    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:11.525457    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:11.530120    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:11.823970    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:12.027691    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:12.029716    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:12.331949    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:12.520036    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:12.525077    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:12.821054    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:13.029967    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:13.030787    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:13.329516    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:13.518862    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:13.523567    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:13.822829    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:14.028145    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:14.028802    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:14.330313    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:14.522650    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:14.527037    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:14.827634    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:15.030013    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:15.030274    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:15.331958    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:15.519379    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:15.525704    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:15.820844    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:16.025421    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:16.025421    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:16.327960    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:16.532543    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:16.532543    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:16.820185    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:17.024990    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:17.027574    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:17.327664    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:17.532946    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:17.533205    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:17.835044    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:18.020781    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:18.025728    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:18.326800    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:18.530656    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:18.530833    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:18.831749    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:19.019138    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:19.024555    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:19.333887    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:19.525321    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:19.525771    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:19.830583    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:20.031996    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:20.032988    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:20.330186    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:20.519397    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:20.524793    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:20.821494    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:21.026544    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:21.027539    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:21.333636    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:21.523521    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:21.523521    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:21.826045    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:22.032651    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:22.033279    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:22.603485    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:22.604226    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:22.607433    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:22.822921    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:23.028841    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:23.030834    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:23.330974    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:23.529731    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:23.533811    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:23.823549    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:24.028391    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:24.033366    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:24.331987    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:24.522560    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:24.522560    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:24.823686    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:25.047606    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:25.048388    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:25.331731    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:25.525358    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:25.527370    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:25.825478    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:26.031272    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:26.032964    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:26.332094    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:26.520686    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:26.525571    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:26.821901    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:27.028888    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:27.029734    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:27.326748    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:27.532771    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:27.534725    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:27.846674    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:28.202878    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:28.205839    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:28.323008    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:28.529767    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:28.529767    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:28.831949    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:29.022744    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:29.023332    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:29.336430    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:29.523173    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:29.523173    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:29.821511    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:30.031881    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:30.034270    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:30.329065    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:30.521033    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:30.524683    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:30.830947    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:31.027202    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:31.034978    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:31.335125    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:31.546853    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:31.546853    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:31.840023    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:32.031089    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:32.033053    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:32.326602    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:32.518328    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:32.523721    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:32.834996    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:33.026657    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:33.028500    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:33.333588    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:33.522478    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:33.533711    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:33.827070    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:34.032093    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:34.032093    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:34.332405    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:34.540424    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:34.540623    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:34.823243    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:35.027823    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:35.027823    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:35.331557    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:35.519407    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:35.525225    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:35.821236    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:36.027738    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:36.027738    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:36.329437    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:36.532804    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:36.535659    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:36.820962    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:37.026839    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:37.028802    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:37.330208    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:37.518236    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:37.524061    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:37.820772    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:38.026630    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:38.027334    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:38.328790    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:38.531600    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:38.532090    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:38.832783    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:39.024356    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:39.025340    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:39.326592    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:39.531859    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:39.533847    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:39.835882    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:40.024837    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:40.024963    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:40.331512    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:40.756432    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:40.756797    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:40.974944    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:41.028364    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:41.028940    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:41.332426    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:41.532481    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:41.532671    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:41.832190    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:42.020644    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:42.025158    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:42.324745    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:42.526806    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:42.526969    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:42.829790    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:43.105464    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:43.105464    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:43.333505    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:43.523599    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:43.523664    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:43.832588    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:44.034348    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0719 03:32:44.034681    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:44.333969    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:44.537980    3540 kapi.go:107] duration metric: took 1m5.0235956s to wait for kubernetes.io/minikube-addons=registry ...
	I0719 03:32:44.539407    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:44.823396    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:45.028999    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:45.333534    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:45.524965    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:45.829925    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:46.019639    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:46.322347    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:46.526447    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:46.831319    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:47.032051    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:47.323277    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:47.525677    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:47.831378    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:48.019277    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:48.448010    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:48.562519    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:48.836440    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:49.022880    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:49.328375    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:49.531955    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:49.825594    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:50.038502    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:50.556608    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:50.556798    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:50.833301    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:51.031882    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:51.361191    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:51.547906    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:51.835067    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:52.021659    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:52.335763    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:52.531439    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:52.834131    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:53.022974    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:53.325124    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:53.529703    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:53.835355    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:54.023989    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:54.328965    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:54.531590    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:54.818693    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:55.024386    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:55.708747    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:55.760549    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:55.825882    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:56.027686    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:56.333677    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:56.522577    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:56.833454    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:57.023412    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:57.327878    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:57.531715    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:57.834863    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:58.022196    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:58.329934    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:58.532558    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:58.835806    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:59.030667    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:59.329568    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:32:59.529409    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:32:59.833308    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:00.023747    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:00.326433    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:00.530918    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:00.833914    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:01.023162    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:01.567017    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:01.567606    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:01.827349    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:02.038162    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:02.330972    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:02.520705    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:02.829743    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:03.028390    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:03.330659    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:03.537557    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:03.836485    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:04.023496    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:04.325419    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:04.527762    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:04.843220    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:05.020322    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:05.333813    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:05.519692    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:05.834276    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:06.022255    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:06.321836    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:06.526652    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:06.831287    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:07.043544    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:07.321410    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:07.521660    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:07.823676    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:08.023461    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:08.321676    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:08.525953    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:08.825849    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:09.032833    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:09.335706    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:09.522950    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:09.826601    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:10.032384    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:10.548844    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:10.554353    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:10.832661    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:11.020153    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:11.324212    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:11.534326    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:11.832966    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:12.069970    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:12.325643    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:12.525131    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:12.839912    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:13.033143    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:13.379487    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:13.519065    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:13.821559    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:14.028222    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:14.340633    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:14.534237    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:14.820812    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:15.027292    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:15.331413    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:15.522856    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:15.824293    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:16.028589    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:16.334067    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:16.522602    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:16.825767    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:17.029243    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:17.329625    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:17.524314    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:17.827619    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:18.028069    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:18.332220    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:18.525213    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:18.957010    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:19.125060    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:19.335101    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:19.521472    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:19.826504    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:20.366757    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:20.369232    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:21.404855    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:21.410855    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:21.417230    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:21.422559    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:21.547169    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:21.831234    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:22.031425    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:22.339974    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:22.525230    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:22.833427    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:23.033916    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:23.320851    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:23.527025    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:23.831438    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:24.033907    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:24.322940    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:24.524627    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:24.828004    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:25.053758    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:25.335067    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:25.521517    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:25.829251    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:26.030045    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:26.320499    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:26.523067    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:26.827328    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:27.025282    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:27.325402    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:27.518558    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:27.830235    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:28.033773    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:28.400137    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:28.527627    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:28.842742    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:29.032806    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:29.335967    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:29.525640    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:29.825056    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:30.029934    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:30.335439    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:30.523062    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:30.830321    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:31.028493    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:31.332795    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:31.532515    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:31.829394    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:32.033891    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:32.319577    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:32.524942    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:32.826030    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:33.036020    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:33.337525    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:33.523120    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:33.826240    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:34.369020    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:34.375872    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:34.547414    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:34.828652    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:35.030760    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:35.334555    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:35.526179    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:35.850820    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:36.029351    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:36.332402    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:36.520977    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:36.830061    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:37.029943    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:37.351398    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:37.524638    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:37.821503    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:38.021679    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:38.349632    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:38.529112    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:38.828197    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:39.031878    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:39.336263    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:39.524794    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:39.828728    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:40.018500    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:40.320354    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:40.526583    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:40.830729    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:41.033764    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:41.353555    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:41.526653    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:41.825825    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:42.023659    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:42.323531    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:42.526120    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:42.830169    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:43.028775    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:43.327251    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:43.530962    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:43.845697    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:44.033644    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:44.333591    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:44.532044    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:44.830546    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:45.029973    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:45.332257    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:45.529447    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:45.831682    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:46.032258    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:46.334759    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:46.523884    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:46.833209    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:47.032993    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:47.722067    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:47.731821    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:47.826045    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:48.032245    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:48.333321    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:48.534328    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:48.837123    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:49.033395    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:49.331483    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:49.533761    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:49.835408    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:50.024749    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:50.326811    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:50.527702    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:50.830880    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:51.034267    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:51.337795    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:51.520344    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:51.821154    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:52.023202    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:52.327135    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:52.527095    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:52.830151    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:53.038207    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:53.341001    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:53.522314    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:53.827349    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:54.244155    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:54.331180    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:54.520362    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:54.823135    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:55.029796    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:55.404891    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:55.519986    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:55.823550    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:56.044922    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:56.333615    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:56.527997    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:56.829647    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:57.027109    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:57.330166    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:57.520203    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:57.824524    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:58.031841    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:58.327589    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:58.530195    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:58.841849    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:59.033039    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:59.323157    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:33:59.524328    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:33:59.828631    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:00.052497    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:00.335721    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:00.522067    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:00.820551    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:01.023824    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:01.327595    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:01.530236    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:01.833292    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:02.023358    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:02.327713    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:02.531076    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:02.853947    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:03.021564    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:03.327699    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:03.526609    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:03.833773    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:04.033725    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:04.332214    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:04.527909    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:04.829103    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:05.751907    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:05.752915    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:05.757906    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:05.831287    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:06.033834    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:06.328482    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:06.532358    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:06.834381    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:07.019900    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:07.326799    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:07.531149    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:07.833706    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:08.023774    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:08.324609    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:08.527257    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:08.832675    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:09.036567    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:09.321388    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:09.525072    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:09.831657    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:10.033725    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:10.333048    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:10.520421    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:10.824361    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:11.026976    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:11.330917    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:11.530449    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:11.827896    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:12.025184    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:12.324168    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:12.527477    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:12.840179    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:13.030181    3540 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0719 03:34:13.387552    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:13.526206    3540 kapi.go:107] duration metric: took 2m34.0128048s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0719 03:34:13.828817    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:14.333934    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:14.820558    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:15.331244    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:15.829309    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:16.331187    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:16.821960    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:17.341544    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:17.826353    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:18.334376    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:18.831284    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:19.324148    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:19.828046    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:20.336598    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:20.832605    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:21.325714    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:21.824575    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:22.322679    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:22.827903    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:23.325963    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:23.827133    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:24.331549    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:24.826509    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:25.322081    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:25.830384    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:26.335271    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:26.828891    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:27.332789    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:27.829804    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:28.335579    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:28.833827    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:29.335916    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:29.833779    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:30.327004    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:30.838043    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:31.327291    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:31.835026    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:32.371773    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:32.672260    3540 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 03:34:32.672260    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:32.837442    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0719 03:34:33.172588    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:33.327964    3540 kapi.go:107] duration metric: took 2m47.515155s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 03:34:33.666862    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:34.168723    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:34.662088    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:35.165394    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:35.664570    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:36.164720    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:36.663980    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:37.167387    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:37.666234    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:38.167129    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:38.667533    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:39.170472    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:39.662778    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:40.171564    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:40.663228    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:41.166539    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:41.667461    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:42.168922    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:42.673409    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:43.170435    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:43.670132    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:44.169734    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:44.673395    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:45.158579    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:45.657988    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:46.171869    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:46.658212    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:47.160593    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:47.659023    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:48.173283    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:48.673075    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:49.159074    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:49.672532    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:50.158879    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:50.667858    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:51.160899    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:51.671531    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:52.160191    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:52.660288    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:53.165436    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:53.669302    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:54.158459    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:54.664752    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:55.173802    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:55.673864    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:56.159961    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:56.659111    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:57.161501    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:57.661192    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:58.165863    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:58.666129    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:59.167094    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:34:59.669595    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:00.170818    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:00.669068    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:01.165666    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:01.669220    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:02.170438    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:02.668576    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:03.171800    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:03.671312    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:04.159719    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:04.660665    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:05.164895    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:05.665167    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:06.168630    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:06.662681    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:07.168904    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:07.661365    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:08.598967    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:08.672248    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:09.171641    3540 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 03:35:09.678348    3540 kapi.go:107] duration metric: took 3m22.5254501s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 03:35:09.680773    3540 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-811100 cluster.
	I0719 03:35:09.688795    3540 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 03:35:09.701887    3540 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0719 03:35:09.709270    3540 out.go:177] * Enabled addons: cloud-spanner, inspektor-gadget, storage-provisioner, nvidia-device-plugin, helm-tiller, ingress-dns, metrics-server, volcano, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0719 03:35:09.714525    3540 addons.go:510] duration metric: took 4m5.4687073s for enable addons: enabled=[cloud-spanner inspektor-gadget storage-provisioner nvidia-device-plugin helm-tiller ingress-dns metrics-server volcano yakd storage-provisioner-rancher volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0719 03:35:09.714557    3540 start.go:246] waiting for cluster config update ...
	I0719 03:35:09.714557    3540 start.go:255] writing updated cluster config ...
	I0719 03:35:09.731169    3540 ssh_runner.go:195] Run: rm -f paused
	I0719 03:35:09.989356    3540 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 03:35:09.994527    3540 out.go:177] * Done! kubectl is now configured to use "addons-811100" cluster and "default" namespace by default
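	The gcp-auth messages above mention a `gcp-auth-skip-secret` label for opting a pod out of credential mounting, and a `--refresh` flag for updating existing pods. A minimal sketch of a pod spec using that label (pod name, container name, and command are illustrative, not taken from this run; the image is one used elsewhere in this report):

	```yaml
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds              # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"  # label key cited in the minikube output above
	spec:
	  containers:
	    - name: app                   # illustrative container name
	      image: gcr.io/k8s-minikube/busybox
	      command: ["sleep", "3600"]
	```

	Per the log output, pods created before the addon finished enabling only receive credentials after being recreated, or after rerunning `minikube addons enable gcp-auth --refresh`.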
	
	
	==> Docker <==
	Jul 19 03:35:44 addons-811100 dockerd[1433]: time="2024-07-19T03:35:44.938150144Z" level=warning msg="cleaning up after shim disconnected" id=195327c50db35b5ee8ee2945f26729c0fba2ee195858c4b85712455b2c99f226 namespace=moby
	Jul 19 03:35:44 addons-811100 dockerd[1433]: time="2024-07-19T03:35:44.938209644Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:35:45 addons-811100 dockerd[1427]: time="2024-07-19T03:35:45.153731546Z" level=info msg="ignoring event" container=68272affe7a59a379751935a0fca6d420a281aedb1d63a1e695f00096c467fe5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:35:45 addons-811100 dockerd[1433]: time="2024-07-19T03:35:45.155796744Z" level=info msg="shim disconnected" id=68272affe7a59a379751935a0fca6d420a281aedb1d63a1e695f00096c467fe5 namespace=moby
	Jul 19 03:35:45 addons-811100 dockerd[1433]: time="2024-07-19T03:35:45.156114244Z" level=warning msg="cleaning up after shim disconnected" id=68272affe7a59a379751935a0fca6d420a281aedb1d63a1e695f00096c467fe5 namespace=moby
	Jul 19 03:35:45 addons-811100 dockerd[1433]: time="2024-07-19T03:35:45.156136344Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:35:45 addons-811100 cri-dockerd[1325]: time="2024-07-19T03:35:45Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-proxy-4z6j6_kube-system\": unexpected command output nsenter: cannot open /proc/3758/ns/net: No such file or directory\n with error: exit status 1"
	Jul 19 03:35:45 addons-811100 dockerd[1427]: time="2024-07-19T03:35:45.384335135Z" level=info msg="ignoring event" container=86c43aa358f283e8c4766ca09c68a35bb2dd87d266a326e1a37a781eb89670e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:35:45 addons-811100 dockerd[1433]: time="2024-07-19T03:35:45.385550533Z" level=info msg="shim disconnected" id=86c43aa358f283e8c4766ca09c68a35bb2dd87d266a326e1a37a781eb89670e1 namespace=moby
	Jul 19 03:35:45 addons-811100 dockerd[1433]: time="2024-07-19T03:35:45.385802033Z" level=warning msg="cleaning up after shim disconnected" id=86c43aa358f283e8c4766ca09c68a35bb2dd87d266a326e1a37a781eb89670e1 namespace=moby
	Jul 19 03:35:45 addons-811100 dockerd[1433]: time="2024-07-19T03:35:45.385870333Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:35:52 addons-811100 cri-dockerd[1325]: time="2024-07-19T03:35:52Z" level=error msg="error getting RW layer size for container ID '195327c50db35b5ee8ee2945f26729c0fba2ee195858c4b85712455b2c99f226': Error response from daemon: No such container: 195327c50db35b5ee8ee2945f26729c0fba2ee195858c4b85712455b2c99f226"
	Jul 19 03:35:52 addons-811100 cri-dockerd[1325]: time="2024-07-19T03:35:52Z" level=error msg="Set backoffDuration to : 1m0s for container ID '195327c50db35b5ee8ee2945f26729c0fba2ee195858c4b85712455b2c99f226'"
	Jul 19 03:35:54 addons-811100 dockerd[1427]: time="2024-07-19T03:35:54.312696812Z" level=info msg="ignoring event" container=820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:35:54 addons-811100 dockerd[1433]: time="2024-07-19T03:35:54.313481912Z" level=info msg="shim disconnected" id=820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09 namespace=moby
	Jul 19 03:35:54 addons-811100 dockerd[1433]: time="2024-07-19T03:35:54.314118911Z" level=warning msg="cleaning up after shim disconnected" id=820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09 namespace=moby
	Jul 19 03:35:54 addons-811100 dockerd[1433]: time="2024-07-19T03:35:54.314295411Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:35:54 addons-811100 dockerd[1427]: time="2024-07-19T03:35:54.535482979Z" level=info msg="ignoring event" container=ce310a1809f0ea6b02697130df89c99fb6700cdd25ad1f79100bec20a11cf077 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:35:54 addons-811100 dockerd[1433]: time="2024-07-19T03:35:54.536462678Z" level=info msg="shim disconnected" id=ce310a1809f0ea6b02697130df89c99fb6700cdd25ad1f79100bec20a11cf077 namespace=moby
	Jul 19 03:35:54 addons-811100 dockerd[1433]: time="2024-07-19T03:35:54.537702377Z" level=warning msg="cleaning up after shim disconnected" id=ce310a1809f0ea6b02697130df89c99fb6700cdd25ad1f79100bec20a11cf077 namespace=moby
	Jul 19 03:35:54 addons-811100 dockerd[1433]: time="2024-07-19T03:35:54.537727577Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:35:54 addons-811100 cri-dockerd[1325]: time="2024-07-19T03:35:54Z" level=error msg="error getting RW layer size for container ID '2b9b1d4daa9d2a62de65f501d4981c4e30ad85047074720890673ee8dd5a9b65': Error response from daemon: No such container: 2b9b1d4daa9d2a62de65f501d4981c4e30ad85047074720890673ee8dd5a9b65"
	Jul 19 03:35:54 addons-811100 cri-dockerd[1325]: time="2024-07-19T03:35:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2b9b1d4daa9d2a62de65f501d4981c4e30ad85047074720890673ee8dd5a9b65'"
	Jul 19 03:35:54 addons-811100 cri-dockerd[1325]: time="2024-07-19T03:35:54Z" level=error msg="error getting RW layer size for container ID '40f9953991a0cd1210c1c911762580df026ada5e8166e4691eafef26fe56f12a': Error response from daemon: No such container: 40f9953991a0cd1210c1c911762580df026ada5e8166e4691eafef26fe56f12a"
	Jul 19 03:35:54 addons-811100 cri-dockerd[1325]: time="2024-07-19T03:35:54Z" level=error msg="Set backoffDuration to : 1m0s for container ID '40f9953991a0cd1210c1c911762580df026ada5e8166e4691eafef26fe56f12a'"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	cd5b3f569a1dc       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734                            40 seconds ago       Exited              gadget                                   4                   e2e556a470842       gadget-bkjvc
	a6f75fc35eaa1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 58 seconds ago       Running             gcp-auth                                 0                   688a616faf4a4       gcp-auth-5db96cd9b4-q68h8
	a44fd08caba6d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   1d9e62eff54eb       csi-hostpathplugin-pbjf8
	d87167ed2701f       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   1d9e62eff54eb       csi-hostpathplugin-pbjf8
	c229abfef2d47       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   1d9e62eff54eb       csi-hostpathplugin-pbjf8
	51934ee892e64       volcanosh/vc-webhook-manager@sha256:31e8c7adc6859e582b8edd053e2e926409bcfd1bf39e3a10d05949f7738144c4                                         About a minute ago   Running             admission                                0                   d3985db1a503c       volcano-admission-5f7844f7bc-mzdsg
	74d32298cc2f3       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   1d9e62eff54eb       csi-hostpathplugin-pbjf8
	7989d42cfa720       registry.k8s.io/ingress-nginx/controller@sha256:e6439a12b52076965928e83b7b56aae6731231677b01e81818bce7fa5c60161a                             About a minute ago   Running             controller                               0                   04a5d99c36552       ingress-nginx-controller-6d9bd977d4-gkvlk
	d6b2db84772fb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                2 minutes ago        Running             node-driver-registrar                    0                   1d9e62eff54eb       csi-hostpathplugin-pbjf8
	a23f6bf9d1d60       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              2 minutes ago        Running             csi-resizer                              0                   c2527c56349b8       csi-hostpath-resizer-0
	f6ab41555b400       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   1d9e62eff54eb       csi-hostpathplugin-pbjf8
	d1eb18a7e53b3       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   fd7ebe08453d5       csi-hostpath-attacher-0
	e986576eab9bc       volcanosh/vc-scheduler@sha256:1ebc36090a981cb8bd703f9e9842f8e0a53ef6bf9034d51defc1ea689f38a60f                                               2 minutes ago        Running             volcano-scheduler                        0                   391be71b626d6       volcano-scheduler-844f6db89b-csf64
	f2abd3c9b2a11       volcanosh/vc-controller-manager@sha256:d1337c3af008318577ca718a7f35b75cefc1071a35749c4f9430035abd4fbc93                                      2 minutes ago        Running             volcano-controllers                      0                   a6d2dbafebcb8       volcano-controllers-59cb4746db-kd8dh
	36ee060cfe17e       684c5ea3b61b2                                                                                                                                2 minutes ago        Exited              patch                                    1                   76906e0b308fc       ingress-nginx-admission-patch-gqp26
	a37570f623136       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   3c7b1e7e7fae2       ingress-nginx-admission-create-krvbh
	24806a307b37b       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                                        2 minutes ago        Running             yakd                                     0                   f3d8681bb11b3       yakd-dashboard-799879c74f-6rvvr
	16cadcecb77a4       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   155d752ccdb2f       snapshot-controller-745499f584-x6dxz
	9649df0a4819b       registry.k8s.io/metrics-server/metrics-server@sha256:db3800085a0957083930c3932b17580eec652cfb6156a05c0f79c7543e80d17a                        2 minutes ago        Running             metrics-server                           0                   5bb7bc964f822       metrics-server-c59844bb4-k5xqw
	3ee422e70a0e5       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                                       3 minutes ago        Running             local-path-provisioner                   0                   acda6772d2d83       local-path-provisioner-8d985888d-w9w6w
	02adcb4a91801       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      3 minutes ago        Running             volume-snapshot-controller               0                   6d8bbe3c956aa       snapshot-controller-745499f584-lprkt
	f2d9dfeb4e084       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  3 minutes ago        Running             tiller                                   0                   9b4e63525dde5       tiller-deploy-6677d64bcd-9m5qz
	50bf2d3643cc7       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             3 minutes ago        Running             minikube-ingress-dns                     0                   3f9a39600add3       kube-ingress-dns-minikube
	0fdcb56b0bbbf       6e38f40d628db                                                                                                                                4 minutes ago        Running             storage-provisioner                      0                   38d2001f1cfd9       storage-provisioner
	8b52f0d6c9ddd       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   63731b56972d9       coredns-7db6d8ff4d-vvhpd
	f6566bd7ecb0c       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   a13a477eb4c23       coredns-7db6d8ff4d-ljnfs
	0389aa9fec2f3       55bb025d2cfa5                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   b614a5d32bf92       kube-proxy-k7rwl
	8c22e2f1dfd5c       3861cfcd7c04c                                                                                                                                5 minutes ago        Running             etcd                                     0                   9363b2e78af84       etcd-addons-811100
	ecf3f877ac4e4       1f6d574d502f3                                                                                                                                5 minutes ago        Running             kube-apiserver                           0                   3d1c5f82751fa       kube-apiserver-addons-811100
	824d910ccb59c       76932a3b37d7e                                                                                                                                5 minutes ago        Running             kube-controller-manager                  0                   af444753702d6       kube-controller-manager-addons-811100
	f1edaa2e4e4ea       3edc18e7b7672                                                                                                                                5 minutes ago        Running             kube-scheduler                           0                   7e98721a8574d       kube-scheduler-addons-811100
	
	
	==> controller_ingress [7989d42cfa72] <==
	W0719 03:34:12.706403       6 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0719 03:34:12.706952       6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0719 03:34:12.715428       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.3" state="clean" commit="6fc0a69044f1ac4c13841ec4391224a2df241460" platform="linux/amd64"
	I0719 03:34:13.002118       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0719 03:34:13.074824       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0719 03:34:13.116339       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0719 03:34:13.183685       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"622f4529-6986-484a-a560-ce48eb891d11", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0719 03:34:13.193294       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"d0386a58-7e2b-4454-b861-61adf69a0ba7", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0719 03:34:13.193358       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"bd03685d-2ec6-49e4-9548-be170a0dba2b", APIVersion:"v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0719 03:34:14.318313       6 nginx.go:317] "Starting NGINX process"
	I0719 03:34:14.318797       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0719 03:34:14.318969       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0719 03:34:14.319478       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0719 03:34:14.346539       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0719 03:34:14.346930       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-6d9bd977d4-gkvlk"
	I0719 03:34:14.351221       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-6d9bd977d4-gkvlk" node="addons-811100"
	I0719 03:34:14.382539       6 controller.go:213] "Backend successfully reloaded"
	I0719 03:34:14.382861       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0719 03:34:14.383387       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6d9bd977d4-gkvlk", UID:"e992f3f9-6f47-4b15-878e-42e9ce442f51", APIVersion:"v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         7c44f992012555ff7f4e47c08d7c542ca9b4b1f7
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [8b52f0d6c9dd] <==
	[INFO] plugin/kubernetes: Trace[1459263221]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 03:31:19.309) (total time: 30007ms):
	Trace[1459263221]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30004ms (03:31:49.313)
	Trace[1459263221]: [30.007543497s] [30.007543497s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2107158842]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 03:31:19.310) (total time: 30006ms):
	Trace[2107158842]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30005ms (03:31:49.315)
	Trace[2107158842]: [30.006958899s] [30.006958899s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[186236535]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 03:31:19.308) (total time: 30008ms):
	Trace[186236535]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:31:49.309)
	Trace[186236535]: [30.008749694s] [30.008749694s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] 10.244.0.7:45797 - 28884 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000634097s
	[INFO] 10.244.0.7:45797 - 23760 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000404898s
	[INFO] 10.244.0.7:45069 - 52629 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0001359s
	[INFO] 10.244.0.7:45069 - 42903 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000334199s
	[INFO] 10.244.0.7:42726 - 27272 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000122299s
	[INFO] 10.244.0.7:42726 - 46222 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000886s
	[INFO] 10.244.0.7:44626 - 51052 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0001665s
	[INFO] 10.244.0.7:44626 - 54626 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.0000503s
	[INFO] 10.244.0.26:38405 - 23390 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000494199s
	[INFO] 10.244.0.26:50913 - 2881 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000188699s
	[INFO] 10.244.0.26:46216 - 41753 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.001310498s
	[INFO] 10.244.0.28:33872 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000345699s
	
	
	==> coredns [f6566bd7ecb0] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1966254654]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 03:31:19.103) (total time: 30001ms):
	Trace[1966254654]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (03:31:49.104)
	Trace[1966254654]: [30.001645715s] [30.001645715s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1548996475]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (19-Jul-2024 03:31:19.104) (total time: 30001ms):
	Trace[1548996475]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (03:31:49.105)
	Trace[1548996475]: [30.001824913s] [30.001824913s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 10.244.0.7:41409 - 12087 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000173299s
	[INFO] 10.244.0.7:41409 - 17717 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000595297s
	[INFO] 10.244.0.7:50303 - 37639 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000093999s
	[INFO] 10.244.0.7:50303 - 9497 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000796s
	[INFO] 10.244.0.7:44064 - 10052 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000162399s
	[INFO] 10.244.0.7:44064 - 36166 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.0000843s
	[INFO] 10.244.0.7:58473 - 13841 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001434s
	[INFO] 10.244.0.7:58473 - 57630 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0000502s
	[INFO] 10.244.0.26:46564 - 39590 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000430499s
	[INFO] 10.244.0.26:40962 - 11849 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000341799s
	[INFO] 10.244.0.26:42529 - 5051 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001517s
	[INFO] 10.244.0.26:33924 - 48442 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0001001s
	[INFO] 10.244.0.26:38612 - 46471 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.001604098s
	[INFO] 10.244.0.28:44290 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000321599s
	
	
	==> describe nodes <==
	Name:               addons-811100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-811100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=addons-811100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T03_30_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-811100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-811100"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 03:30:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-811100
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 03:36:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 03:35:57 +0000   Fri, 19 Jul 2024 03:30:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 03:35:57 +0000   Fri, 19 Jul 2024 03:30:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 03:35:57 +0000   Fri, 19 Jul 2024 03:30:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 03:35:57 +0000   Fri, 19 Jul 2024 03:30:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.164.220
	  Hostname:    addons-811100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 d46a152c6f1a4a898dbd5d7a297d3042
	  System UUID:                4a5d6218-bfc3-904e-9e7c-979dc2869ede
	  Boot ID:                    b6aa2d2c-4435-4841-ab5c-0178f95d2ca4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-bkjvc                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  gcp-auth                    gcp-auth-5db96cd9b4-q68h8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-gkvlk    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m28s
	  kube-system                 coredns-7db6d8ff4d-ljnfs                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m
	  kube-system                 coredns-7db6d8ff4d-vvhpd                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 csi-hostpathplugin-pbjf8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 etcd-addons-811100                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m18s
	  kube-system                 kube-apiserver-addons-811100                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-addons-811100        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-k7rwl                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-addons-811100                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 metrics-server-c59844bb4-k5xqw               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m30s
	  kube-system                 snapshot-controller-745499f584-lprkt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 snapshot-controller-745499f584-x6dxz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 tiller-deploy-6677d64bcd-9m5qz               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  local-path-storage          local-path-provisioner-8d985888d-w9w6w       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	  volcano-system              volcano-admission-5f7844f7bc-mzdsg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  volcano-system              volcano-controllers-59cb4746db-kd8dh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  volcano-system              volcano-scheduler-844f6db89b-csf64           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m24s
	  yakd-dashboard              yakd-dashboard-799879c74f-6rvvr              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  0 (0%)
	  memory             658Mi (17%)  596Mi (15%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m49s  kube-proxy       
	  Normal  Starting                 5m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m17s  kubelet          Node addons-811100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m17s  kubelet          Node addons-811100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m17s  kubelet          Node addons-811100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m13s  kubelet          Node addons-811100 status is now: NodeReady
	  Normal  RegisteredNode           5m3s   node-controller  Node addons-811100 event: Registered Node addons-811100 in Controller
	
	
	==> dmesg <==
	[Jul19 03:31] systemd-fstab-generator[2506]: Ignoring "noauto" option for root device
	[  +0.591403] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.583325] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.935854] kauditd_printk_skb: 28 callbacks suppressed
	[ +10.928477] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.644938] kauditd_printk_skb: 32 callbacks suppressed
	[  +5.033333] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.025536] kauditd_printk_skb: 83 callbacks suppressed
	[ +14.764537] kauditd_printk_skb: 71 callbacks suppressed
	[Jul19 03:32] kauditd_printk_skb: 2 callbacks suppressed
	[ +27.296564] kauditd_printk_skb: 31 callbacks suppressed
	[Jul19 03:33] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.190859] kauditd_printk_skb: 22 callbacks suppressed
	[  +7.494723] kauditd_printk_skb: 10 callbacks suppressed
	[ +10.665289] hrtimer: interrupt took 1708802 ns
	[Jul19 03:34] kauditd_printk_skb: 54 callbacks suppressed
	[ +12.105308] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.006186] kauditd_printk_skb: 16 callbacks suppressed
	[  +6.256229] kauditd_printk_skb: 26 callbacks suppressed
	[ +26.760998] kauditd_printk_skb: 24 callbacks suppressed
	[Jul19 03:35] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.117012] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.743659] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.100880] kauditd_printk_skb: 61 callbacks suppressed
	[ +25.473087] kauditd_printk_skb: 39 callbacks suppressed
	
	
	==> etcd [8c22e2f1dfd5] <==
	{"level":"warn","ts":"2024-07-19T03:33:55.402539Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.266407ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-07-19T03:33:55.403549Z","caller":"traceutil/trace.go:171","msg":"trace[447984317] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1288; }","duration":"107.290708ms","start":"2024-07-19T03:33:55.296215Z","end":"2024-07-19T03:33:55.403505Z","steps":["trace[447984317] 'range keys from in-memory index tree'  (duration: 106.118807ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:33:55.402531Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.279626ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:33:55.40399Z","caller":"traceutil/trace.go:171","msg":"trace[1619630022] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1288; }","duration":"225.753927ms","start":"2024-07-19T03:33:55.178227Z","end":"2024-07-19T03:33:55.403981Z","steps":["trace[1619630022] 'range keys from in-memory index tree'  (duration: 224.179225ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:34:05.540838Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":2676986221661949977,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-07-19T03:34:05.754715Z","caller":"traceutil/trace.go:171","msg":"trace[2016734982] linearizableReadLoop","detail":"{readStateIndex:1377; appliedIndex:1376; }","duration":"714.168168ms","start":"2024-07-19T03:34:05.040529Z","end":"2024-07-19T03:34:05.754697Z","steps":["trace[2016734982] 'read index received'  (duration: 656.996778ms)","trace[2016734982] 'applied index is now lower than readState.Index'  (duration: 57.17059ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T03:34:05.754862Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"589.841361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-07-19T03:34:05.754896Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"714.358665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:14470"}
	{"level":"info","ts":"2024-07-19T03:34:05.754902Z","caller":"traceutil/trace.go:171","msg":"trace[36735049] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1320; }","duration":"589.89366ms","start":"2024-07-19T03:34:05.164998Z","end":"2024-07-19T03:34:05.754891Z","steps":["trace[36735049] 'agreement among raft nodes before linearized reading'  (duration: 589.822761ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T03:34:05.754922Z","caller":"traceutil/trace.go:171","msg":"trace[996961264] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1320; }","duration":"714.414765ms","start":"2024-07-19T03:34:05.040499Z","end":"2024-07-19T03:34:05.754914Z","steps":["trace[996961264] 'agreement among raft nodes before linearized reading'  (duration: 714.291566ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:34:05.754931Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:34:05.164958Z","time spent":"589.96456ms","remote":"127.0.0.1:57268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":29,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-07-19T03:34:05.754941Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:34:05.040419Z","time spent":"714.516765ms","remote":"127.0.0.1:57268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":14494,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"warn","ts":"2024-07-19T03:34:05.75513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.468106ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:19 size:91261"}
	{"level":"info","ts":"2024-07-19T03:34:05.75515Z","caller":"traceutil/trace.go:171","msg":"trace[830129237] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:19; response_revision:1320; }","duration":"416.515106ms","start":"2024-07-19T03:34:05.338629Z","end":"2024-07-19T03:34:05.755144Z","steps":["trace[830129237] 'agreement among raft nodes before linearized reading'  (duration: 416.377407ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:34:05.755166Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:34:05.338566Z","time spent":"416.595706ms","remote":"127.0.0.1:57268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":19,"response size":91285,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-07-19T03:34:26.307084Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"355.477319ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T03:34:26.307491Z","caller":"traceutil/trace.go:171","msg":"trace[404375910] range","detail":"{range_begin:/registry/controllerrevisions/; range_end:/registry/controllerrevisions0; response_count:0; response_revision:1406; }","duration":"355.971715ms","start":"2024-07-19T03:34:25.951504Z","end":"2024-07-19T03:34:26.307475Z","steps":["trace[404375910] 'count revisions from in-memory index tree'  (duration: 355.33212ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:34:26.307695Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:34:25.951487Z","time spent":"356.195013ms","remote":"127.0.0.1:57594","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":7,"response size":31,"request content":"key:\"/registry/controllerrevisions/\" range_end:\"/registry/controllerrevisions0\" count_only:true "}
	{"level":"warn","ts":"2024-07-19T03:34:26.308219Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.426853ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:34:26.308308Z","caller":"traceutil/trace.go:171","msg":"trace[1121895518] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:0; response_revision:1406; }","duration":"130.612752ms","start":"2024-07-19T03:34:26.177687Z","end":"2024-07-19T03:34:26.308299Z","steps":["trace[1121895518] 'range keys from in-memory index tree'  (duration: 130.337553ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:34:26.307154Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.249335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T03:34:26.311263Z","caller":"traceutil/trace.go:171","msg":"trace[1796945063] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1406; }","duration":"233.327205ms","start":"2024-07-19T03:34:26.07789Z","end":"2024-07-19T03:34:26.311218Z","steps":["trace[1796945063] 'range keys from in-memory index tree'  (duration: 229.101936ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:35:08.600325Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"425.456345ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:11509"}
	{"level":"info","ts":"2024-07-19T03:35:08.600804Z","caller":"traceutil/trace.go:171","msg":"trace[2039913453] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1530; }","duration":"426.096144ms","start":"2024-07-19T03:35:08.174643Z","end":"2024-07-19T03:35:08.600739Z","steps":["trace[2039913453] 'range keys from in-memory index tree'  (duration: 425.319446ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T03:35:08.600955Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T03:35:08.174587Z","time spent":"426.346644ms","remote":"127.0.0.1:57268","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":11533,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	
	
	==> gcp-auth [a6f75fc35eaa] <==
	2024/07/19 03:35:08 GCP Auth Webhook started!
	2024/07/19 03:35:16 Ready to marshal response ...
	2024/07/19 03:35:16 Ready to write response ...
	2024/07/19 03:35:16 Ready to marshal response ...
	2024/07/19 03:35:16 Ready to write response ...
	2024/07/19 03:35:20 Ready to marshal response ...
	2024/07/19 03:35:20 Ready to write response ...
	2024/07/19 03:35:26 Ready to marshal response ...
	2024/07/19 03:35:26 Ready to write response ...
	2024/07/19 03:35:27 Ready to marshal response ...
	2024/07/19 03:35:27 Ready to write response ...
	2024/07/19 03:35:40 Ready to marshal response ...
	2024/07/19 03:35:40 Ready to write response ...
	
	
	==> kernel <==
	 03:36:07 up 7 min,  0 users,  load average: 1.87, 2.31, 1.17
	Linux addons-811100 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [ecf3f877ac4e] <==
	W0719 03:34:06.668298       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:07.677041       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:08.710060       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:09.770346       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:10.866465       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:11.880505       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:12.890524       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:13.982494       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:15.002279       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:16.033693       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:17.041021       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:18.053856       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:19.117469       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:20.200392       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:21.227381       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:22.249754       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:23.281703       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.146.127:443: connect: connection refused
	W0719 03:34:32.013999       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.158.184:443: connect: connection refused
	E0719 03:34:32.014057       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.158.184:443: connect: connection refused
	W0719 03:34:50.035916       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.158.184:443: connect: connection refused
	E0719 03:34:50.036322       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.158.184:443: connect: connection refused
	W0719 03:34:50.244082       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.158.184:443: connect: connection refused
	E0719 03:34:50.244130       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.158.184:443: connect: connection refused
	I0719 03:35:26.982273       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0719 03:35:27.090662       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [824d910ccb59] <==
	I0719 03:34:50.285551       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:34:50.289557       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:34:50.304642       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:34:51.659146       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0719 03:34:52.796635       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:34:53.047218       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0719 03:34:53.859069       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0719 03:34:54.080064       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0719 03:34:54.100900       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0719 03:34:54.124341       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0719 03:34:54.153911       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:34:54.900451       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:34:55.165225       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:34:55.182673       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:34:55.196082       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:35:09.358996       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="18.991679ms"
	I0719 03:35:09.360757       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="35.2µs"
	I0719 03:35:24.035912       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0719 03:35:24.212467       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0719 03:35:25.014294       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:35:25.085147       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0719 03:35:26.381275       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
	I0719 03:35:44.634041       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-656c9c8d9c" duration="5.4µs"
	I0719 03:35:54.207232       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-6fcd4f6f98" duration="9.9µs"
	I0719 03:35:58.953570       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-8d985888d" duration="7.1µs"
	
	
	==> kube-proxy [0389aa9fec2f] <==
	I0719 03:31:17.219016       1 server_linux.go:69] "Using iptables proxy"
	I0719 03:31:17.321113       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.164.220"]
	I0719 03:31:17.701209       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 03:31:17.701391       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 03:31:17.701444       1 server_linux.go:165] "Using iptables Proxier"
	I0719 03:31:17.778559       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 03:31:17.779498       1 server.go:872] "Version info" version="v1.30.3"
	I0719 03:31:17.779849       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 03:31:17.794917       1 config.go:192] "Starting service config controller"
	I0719 03:31:17.794952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 03:31:17.795658       1 config.go:101] "Starting endpoint slice config controller"
	I0719 03:31:17.795695       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 03:31:17.796553       1 config.go:319] "Starting node config controller"
	I0719 03:31:17.796567       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 03:31:17.924008       1 shared_informer.go:320] Caches are synced for node config
	I0719 03:31:17.924089       1 shared_informer.go:320] Caches are synced for service config
	I0719 03:31:17.924166       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f1edaa2e4e4e] <==
	W0719 03:30:48.692318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 03:30:48.692768       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 03:30:48.692709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 03:30:48.693633       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 03:30:48.742540       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 03:30:48.742568       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 03:30:48.907154       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 03:30:48.907431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 03:30:49.014808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 03:30:49.015178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 03:30:49.099946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 03:30:49.100303       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 03:30:49.133414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 03:30:49.133572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 03:30:49.151197       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 03:30:49.151344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 03:30:49.158619       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 03:30:49.158702       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 03:30:49.228336       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 03:30:49.228456       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 03:30:49.245655       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 03:30:49.245986       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 03:30:49.264157       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 03:30:49.264344       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0719 03:30:51.463661       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 03:35:46 addons-811100 kubelet[2282]: I0719 03:35:46.505752    2282 scope.go:117] "RemoveContainer" containerID="e686913da167e751a937936ec84e580f6427ff88b69c4dc4c2997a29c44b87ac"
	Jul 19 03:35:46 addons-811100 kubelet[2282]: I0719 03:35:46.848447    2282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ed772e6-0b8d-489f-aa21-910d5b4fa5d9" path="/var/lib/kubelet/pods/0ed772e6-0b8d-489f-aa21-910d5b4fa5d9/volumes"
	Jul 19 03:35:46 addons-811100 kubelet[2282]: I0719 03:35:46.849436    2282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f54d5d-5da9-44a4-8d26-866c53216fc1" path="/var/lib/kubelet/pods/57f54d5d-5da9-44a4-8d26-866c53216fc1/volumes"
	Jul 19 03:35:50 addons-811100 kubelet[2282]: E0719 03:35:50.894957    2282 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 03:35:50 addons-811100 kubelet[2282]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 03:35:50 addons-811100 kubelet[2282]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 03:35:50 addons-811100 kubelet[2282]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 03:35:50 addons-811100 kubelet[2282]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 03:35:51 addons-811100 kubelet[2282]: I0719 03:35:51.187897    2282 scope.go:117] "RemoveContainer" containerID="6b917f3ad2ce11974eb72fbae31eed0b2f736db9c53eef1360b2ae9ce08061dd"
	Jul 19 03:35:51 addons-811100 kubelet[2282]: I0719 03:35:51.235372    2282 scope.go:117] "RemoveContainer" containerID="b7f2be4e1d90448d21d4d2ede3fe951c741a4fad9fea452f956c181396965942"
	Jul 19 03:35:51 addons-811100 kubelet[2282]: I0719 03:35:51.276256    2282 scope.go:117] "RemoveContainer" containerID="2e085dcc62393891c06bc265b36caf65b767230f1ecd4fb311ddf73ff9d45ded"
	Jul 19 03:35:51 addons-811100 kubelet[2282]: I0719 03:35:51.342378    2282 scope.go:117] "RemoveContainer" containerID="21f90566926dd873546b8da1c45096bf7814b2da678d4df9c7d0b27d2a40a212"
	Jul 19 03:35:51 addons-811100 kubelet[2282]: I0719 03:35:51.396180    2282 scope.go:117] "RemoveContainer" containerID="da6254b74b2a7a190fcf290c6d5385c5fa24fe99a9d31384a6865da273652b3a"
	Jul 19 03:35:51 addons-811100 kubelet[2282]: I0719 03:35:51.444182    2282 scope.go:117] "RemoveContainer" containerID="2b9b1d4daa9d2a62de65f501d4981c4e30ad85047074720890673ee8dd5a9b65"
	Jul 19 03:35:51 addons-811100 kubelet[2282]: I0719 03:35:51.488886    2282 scope.go:117] "RemoveContainer" containerID="40f9953991a0cd1210c1c911762580df026ada5e8166e4691eafef26fe56f12a"
	Jul 19 03:35:54 addons-811100 kubelet[2282]: I0719 03:35:54.707278    2282 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrwjt\" (UniqueName: \"kubernetes.io/projected/2a1f967e-2fed-4a63-8f44-8ad25eff7b86-kube-api-access-nrwjt\") pod \"2a1f967e-2fed-4a63-8f44-8ad25eff7b86\" (UID: \"2a1f967e-2fed-4a63-8f44-8ad25eff7b86\") "
	Jul 19 03:35:54 addons-811100 kubelet[2282]: I0719 03:35:54.710560    2282 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1f967e-2fed-4a63-8f44-8ad25eff7b86-kube-api-access-nrwjt" (OuterVolumeSpecName: "kube-api-access-nrwjt") pod "2a1f967e-2fed-4a63-8f44-8ad25eff7b86" (UID: "2a1f967e-2fed-4a63-8f44-8ad25eff7b86"). InnerVolumeSpecName "kube-api-access-nrwjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 03:35:54 addons-811100 kubelet[2282]: I0719 03:35:54.808961    2282 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-nrwjt\" (UniqueName: \"kubernetes.io/projected/2a1f967e-2fed-4a63-8f44-8ad25eff7b86-kube-api-access-nrwjt\") on node \"addons-811100\" DevicePath \"\""
	Jul 19 03:35:54 addons-811100 kubelet[2282]: I0719 03:35:54.861012    2282 scope.go:117] "RemoveContainer" containerID="820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09"
	Jul 19 03:35:54 addons-811100 kubelet[2282]: I0719 03:35:54.936375    2282 scope.go:117] "RemoveContainer" containerID="820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09"
	Jul 19 03:35:54 addons-811100 kubelet[2282]: E0719 03:35:54.937964    2282 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09" containerID="820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09"
	Jul 19 03:35:54 addons-811100 kubelet[2282]: I0719 03:35:54.938007    2282 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09"} err="failed to get container status \"820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09\": rpc error: code = Unknown desc = Error response from daemon: No such container: 820a42eaf50197dd752889c7fa1ea02849ea57e56e17540ae70dcd6db6d92d09"
	Jul 19 03:35:56 addons-811100 kubelet[2282]: I0719 03:35:56.822345    2282 scope.go:117] "RemoveContainer" containerID="cd5b3f569a1dc4bce8fb2a95286b93f8bbfeb2248b83ff98850890f106406069"
	Jul 19 03:35:56 addons-811100 kubelet[2282]: E0719 03:35:56.822883    2282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-bkjvc_gadget(66f2dc77-d2c6-463c-a58e-1acea2e54569)\"" pod="gadget/gadget-bkjvc" podUID="66f2dc77-d2c6-463c-a58e-1acea2e54569"
	Jul 19 03:35:56 addons-811100 kubelet[2282]: I0719 03:35:56.849937    2282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1f967e-2fed-4a63-8f44-8ad25eff7b86" path="/var/lib/kubelet/pods/2a1f967e-2fed-4a63-8f44-8ad25eff7b86/volumes"
	
	
	==> storage-provisioner [0fdcb56b0bbb] <==
	I0719 03:31:39.219380       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 03:31:39.453366       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 03:31:39.453435       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 03:31:39.514845       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 03:31:39.515337       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-811100_98afbb4b-da3f-44df-b3fd-72e06aef393a!
	I0719 03:31:39.517234       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e676458f-32d0-438f-b320-7f61246c0f25", APIVersion:"v1", ResourceVersion:"716", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-811100_98afbb4b-da3f-44df-b3fd-72e06aef393a became leader
	I0719 03:31:39.628057       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-811100_98afbb4b-da3f-44df-b3fd-72e06aef393a!
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 03:35:58.587226    7556 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
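The recurring `Unable to resolve the current Docker CLI context "default"` warning refers to a metadata path whose directory component is a digest of the context name. A minimal sketch of that mapping, assuming Docker's context store keys metadata directories by the SHA-256 of the raw context name (`docker_context_meta_dir` is a hypothetical helper, not a Docker API):

```python
import hashlib

def docker_context_meta_dir(context_name: str) -> str:
    """Directory name Docker's context store would use for a context's
    metadata, assuming the store keys directories by the SHA-256 digest
    of the context name (the layout the warning's path implies)."""
    return hashlib.sha256(context_name.encode("utf-8")).hexdigest()

# The warning's path ends in ...\.docker\contexts\meta\<digest>\meta.json;
# mapping a name to its digest shows which context's metadata is missing.
print(docker_context_meta_dir("default"))  # 64-character lowercase hex digest
```

If the digest of `default` matches the directory in the warning, the error would simply mean no context metadata was ever written there, i.e. CLI noise on a fresh profile rather than a cluster fault.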
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-811100 -n addons-811100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-811100 -n addons-811100: (12.8880061s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-811100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: headlamp-7867546754-5svjh ingress-nginx-admission-create-krvbh ingress-nginx-admission-patch-gqp26 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-811100 describe pod headlamp-7867546754-5svjh ingress-nginx-admission-create-krvbh ingress-nginx-admission-patch-gqp26 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-811100 describe pod headlamp-7867546754-5svjh ingress-nginx-admission-create-krvbh ingress-nginx-admission-patch-gqp26 test-job-nginx-0: exit status 1 (510.7814ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "headlamp-7867546754-5svjh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-krvbh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gqp26" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-811100 describe pod headlamp-7867546754-5svjh ingress-nginx-admission-create-krvbh ingress-nginx-admission-patch-gqp26 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.42s)
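The post-mortem above first lists non-running pods with `--field-selector=status.phase!=Running`, then describes each by name; pods that finish or are garbage-collected between the two calls produce the `NotFound` errors and the exit status 1. The filter itself is equivalent to this client-side sketch (hypothetical pod data mimicking `kubectl get po -A -o json` items):

```python
# Hypothetical pod items; the real post-mortem applies
# --field-selector=status.phase!=Running server-side.
pods = [
    {"metadata": {"name": "registry-proxy-4z6j6"}, "status": {"phase": "Running"}},
    {"metadata": {"name": "ingress-nginx-admission-create-krvbh"}, "status": {"phase": "Succeeded"}},
    {"metadata": {"name": "test-job-nginx-0"}, "status": {"phase": "Pending"}},
]

# Client-side equivalent of the field selector: keep anything not Running.
non_running = [p["metadata"]["name"] for p in pods if p["status"]["phase"] != "Running"]
print(" ".join(non_running))
```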

                                                
                                    
TestErrorSpam/setup (198.77s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-907600 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-907600 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 --driver=hyperv: (3m18.7655978s)
error_spam_test.go:96: unexpected stderr: "W0719 03:40:37.497785    5312 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube6\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-907600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=19302
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-907600" primary control-plane node in "nospam-907600" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-907600" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0719 03:40:37.497785    5312 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (198.77s)
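TestErrorSpam fails not because `start` failed (it completed in 3m18s) but because its stderr contained lines outside the test's allow-list. The shape of such a check, as a simplified sketch rather than minikube's actual implementation, using shortened versions of this run's three offending lines as sample data:

```python
import re

# Sketch of an error-spam check: stderr lines matching no allowed
# pattern are reported as unexpected. The single pattern here is a
# placeholder; minikube's real allow-list differs.
allowed = [re.compile(r"^\s*$")]  # tolerate blank lines only

stderr_lines = [
    'W0719 03:40:37.497785    5312 main.go:291] Unable to resolve the current Docker CLI context "default": ...',
    "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM",
    "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/",
]

unexpected = [line for line in stderr_lines if not any(p.search(line) for p in allowed)]
for line in unexpected:
    print("unexpected stderr:", line)
```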

                                                
                                    
TestFunctional/serial/SoftStart (345.87s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-149600 --alsologtostderr -v=8
E0719 03:50:37.901212    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-149600 --alsologtostderr -v=8: exit status 90 (2m32.5266241s)

                                                
                                                
-- stdout --
	* [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	* Updating the running hyperv "functional-149600" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 03:50:29.927764    3696 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 03:50:30.014967    3696 out.go:291] Setting OutFile to fd 736 ...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.015716    3696 out.go:304] Setting ErrFile to fd 920...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.039559    3696 out.go:298] Setting JSON to false
	I0719 03:50:30.043125    3696 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20056,"bootTime":1721340973,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:50:30.043125    3696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:50:30.049078    3696 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:50:30.054622    3696 notify.go:220] Checking for updates...
	I0719 03:50:30.058333    3696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:50:30.061118    3696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:50:30.064121    3696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:50:30.066177    3696 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:50:30.069184    3696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:50:30.074042    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:50:30.074369    3696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:50:35.621705    3696 out.go:177] * Using the hyperv driver based on existing profile
	I0719 03:50:35.625671    3696 start.go:297] selected driver: hyperv
	I0719 03:50:35.625671    3696 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.625671    3696 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:50:35.676160    3696 cni.go:84] Creating CNI manager for ""
	I0719 03:50:35.676274    3696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:50:35.676342    3696 start.go:340] cluster config:
	{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.676342    3696 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:50:35.683357    3696 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	I0719 03:50:35.686075    3696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:50:35.686075    3696 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:50:35.686075    3696 cache.go:56] Caching tarball of preloaded images
	I0719 03:50:35.686075    3696 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 03:50:35.686075    3696 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 03:50:35.686926    3696 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
	I0719 03:50:35.688692    3696 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 03:50:35.688692    3696 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
	I0719 03:50:35.689690    3696 start.go:96] Skipping create...Using existing machine configuration
	I0719 03:50:35.689690    3696 fix.go:54] fixHost starting: 
	I0719 03:50:35.689690    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:38.581698    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:38.581801    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:38.581801    3696 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
	W0719 03:50:38.581801    3696 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 03:50:38.589005    3696 out.go:177] * Updating the running hyperv "functional-149600" VM ...
	I0719 03:50:38.591394    3696 machine.go:94] provisionDockerMachine start ...
	I0719 03:50:38.591394    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:40.863553    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:43.596398    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:43.597572    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:43.597572    3696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 03:50:43.733324    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:43.733461    3696 buildroot.go:166] provisioning hostname "functional-149600"
	I0719 03:50:43.733530    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:48.641943    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:48.642699    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:48.642699    3696 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
	I0719 03:50:48.808147    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:48.808147    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:53.570500    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:53.571029    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:53.571029    3696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:50:53.715932    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
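The SSH command above is minikube's guard-then-update pattern for `/etc/hosts`: do nothing if the hostname is already present, rewrite an existing `127.0.1.1` entry if there is one, otherwise append a new entry. A minimal local sketch of the same logic (operating on a temp file rather than the VM's real `/etc/hosts`; the regexes here use POSIX classes instead of the logged `\s`, so this is an approximation, not minikube's exact command):

```shell
#!/bin/sh
# Sketch of the /etc/hosts update from the log above, against a temp file.
HOSTS=$(mktemp)
NAME=functional-149600
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# If no line already ends with the hostname, either rewrite an existing
# 127.0.1.1 entry or append a new one -- mirroring the logged branches.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
	if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
		sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
	else
		echo "127.0.1.1 $NAME" >> "$HOSTS"
	fi
fi
grep '^127\.0\.1\.1' "$HOSTS"   # prints: 127.0.1.1 functional-149600
rm -f "$HOSTS"
```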
	I0719 03:50:53.715932    3696 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 03:50:53.715932    3696 buildroot.go:174] setting up certificates
	I0719 03:50:53.715932    3696 provision.go:84] configureAuth start
	I0719 03:50:53.716479    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:55.878607    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:55.878827    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:55.878961    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:58.506063    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:00.678789    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:03.274493    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:03.275498    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:03.275498    3696 provision.go:143] copyHostCerts
	I0719 03:51:03.275498    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 03:51:03.276037    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 03:51:03.276037    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 03:51:03.276654    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 03:51:03.277651    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 03:51:03.278183    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 03:51:03.278183    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 03:51:03.278428    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 03:51:03.279156    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 03:51:03.279712    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 03:51:03.279842    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 03:51:03.280165    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 03:51:03.281113    3696 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
	I0719 03:51:03.689682    3696 provision.go:177] copyRemoteCerts
	I0719 03:51:03.703822    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:51:03.703822    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:05.944447    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:08.655742    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:08.656027    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:08.656027    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:08.767037    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0631549s)
	I0719 03:51:08.767037    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 03:51:08.767037    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 03:51:08.817664    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 03:51:08.817664    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 03:51:08.866416    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 03:51:08.866625    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 03:51:08.914316    3696 provision.go:87] duration metric: took 15.1982045s to configureAuth
	I0719 03:51:08.914388    3696 buildroot.go:189] setting minikube options for container-runtime
	I0719 03:51:08.914388    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:51:08.915029    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:11.135055    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:11.135661    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:11.135851    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:13.746157    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:13.746776    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:13.746776    3696 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 03:51:13.880918    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 03:51:13.880918    3696 buildroot.go:70] root file system type: tmpfs
	I0719 03:51:13.881582    3696 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 03:51:13.881732    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:16.078246    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:18.698444    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:18.698985    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:18.699205    3696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 03:51:18.866085    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 03:51:18.866196    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:21.047452    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:21.047757    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:21.047885    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:23.641636    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:23.641913    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:23.641913    3696 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 03:51:23.783486    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
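The command just executed uses a change-detection idiom: `diff -u old new` exits nonzero only when the files differ, so the `|| { mv ...; systemctl restart ...; }` branch installs the new unit and restarts docker only on an actual change (here it exited cleanly, meaning no restart was needed). A minimal sketch of the idiom with temp files standing in for the units and an `echo` standing in for the restart:

```shell
#!/bin/sh
# Sketch of the diff-or-replace idiom from the logged SSH command.
OLD=$(mktemp); NEW=$(mktemp)
echo "Restart=on-failure" > "$OLD"
echo "Restart=always"     > "$NEW"
diff -u "$OLD" "$NEW" >/dev/null || {
	mv "$NEW" "$OLD"          # install the changed file
	echo "service restarted"  # stands in for daemon-reload + restart
}
cat "$OLD"   # prints: Restart=always
rm -f "$OLD" "$NEW"
```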
	I0719 03:51:23.783486    3696 machine.go:97] duration metric: took 45.1915583s to provisionDockerMachine
	I0719 03:51:23.783595    3696 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
	I0719 03:51:23.783595    3696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:51:23.796656    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:51:23.796656    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:28.598544    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:28.705771    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9090569s)
	I0719 03:51:28.718613    3696 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:51:28.725582    3696 command_runner.go:130] > NAME=Buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 03:51:28.725582    3696 command_runner.go:130] > ID=buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 03:51:28.725582    3696 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 03:51:28.725959    3696 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 03:51:28.727557    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 03:51:28.727636    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 03:51:28.728845    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
	I0719 03:51:28.728930    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> /etc/test/nested/copy/9604/hosts
	I0719 03:51:28.739078    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
	I0719 03:51:28.760168    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 03:51:28.810185    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
	I0719 03:51:28.855606    3696 start.go:296] duration metric: took 5.0719507s for postStartSetup
	I0719 03:51:28.855606    3696 fix.go:56] duration metric: took 53.165288s for fixHost
	I0719 03:51:28.855606    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:31.033424    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:33.666391    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:33.667164    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:33.667164    3696 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 03:51:33.803547    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721361093.813306008
	
	I0719 03:51:33.803653    3696 fix.go:216] guest clock: 1721361093.813306008
	I0719 03:51:33.803653    3696 fix.go:229] Guest: 2024-07-19 03:51:33.813306008 +0000 UTC Remote: 2024-07-19 03:51:28.8556061 +0000 UTC m=+59.006897101 (delta=4.957699908s)
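The drift computation above reads the guest clock over SSH (`date +%s.%N`), compares it to the host-side timestamp, and, as the next lines show, resyncs with `sudo date -s @<epoch>` when the delta (here ~4.96s) is large enough. A rough sketch of that comparison; the 5s threshold and the host reading are illustrative assumptions, not minikube's exact cutoff:

```shell
#!/bin/sh
# Sketch of the guest-clock drift check behind the log above.
guest=1721361093   # from the logged `date +%s.%N`, fractional part dropped
host=1721361088    # hypothetical host epoch, ~5s behind
delta=$((guest - host))
[ "$delta" -lt 0 ] && delta=$((-delta))
if [ "$delta" -ge 5 ]; then
	echo "would run: sudo date -s @$host"
else
	echo "clock within tolerance (${delta}s)"
fi
```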
	I0719 03:51:33.803796    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:35.994681    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:35.995703    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:35.995726    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:38.620465    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:38.620535    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:38.625233    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:38.625457    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:38.625457    3696 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721361093
	I0719 03:51:38.774641    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 03:51:33 UTC 2024
	
	I0719 03:51:38.774641    3696 fix.go:236] clock set: Fri Jul 19 03:51:33 UTC 2024
	 (err=<nil>)
	I0719 03:51:38.774738    3696 start.go:83] releasing machines lock for "functional-149600", held for 1m3.0853019s
	I0719 03:51:38.774962    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:40.997351    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:43.635455    3696 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 03:51:43.635455    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:43.646136    3696 ssh_runner.go:195] Run: cat /version.json
	I0719 03:51:43.646827    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:45.897790    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.898865    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.898924    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.906182    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.661005    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.685815    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.685890    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.685890    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.760223    3696 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 03:51:48.760223    3696 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1247074s)
	W0719 03:51:48.760467    3696 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 03:51:48.777389    3696 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 03:51:48.778342    3696 ssh_runner.go:235] Completed: cat /version.json: (5.1321451s)
	I0719 03:51:48.790179    3696 ssh_runner.go:195] Run: systemctl --version
	I0719 03:51:48.799025    3696 command_runner.go:130] > systemd 252 (252)
	I0719 03:51:48.799025    3696 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 03:51:48.809673    3696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 03:51:48.817402    3696 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 03:51:48.818131    3696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 03:51:48.831435    3696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:51:48.850859    3696 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 03:51:48.850859    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:48.851103    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0719 03:51:48.877177    3696 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 03:51:48.877177    3696 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 03:51:48.893340    3696 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 03:51:48.904541    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 03:51:48.935991    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 03:51:48.954279    3696 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 03:51:48.967927    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 03:51:48.997865    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.026438    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 03:51:49.072524    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.117543    3696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:51:49.154251    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 03:51:49.188018    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 03:51:49.222803    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 03:51:49.261427    3696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:51:49.282134    3696 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 03:51:49.294367    3696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:51:49.330587    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:49.594056    3696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 03:51:49.632649    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:49.645484    3696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 03:51:49.668125    3696 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 03:51:49.668402    3696 command_runner.go:130] > [Unit]
	I0719 03:51:49.668402    3696 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 03:51:49.668402    3696 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 03:51:49.668402    3696 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 03:51:49.668497    3696 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitBurst=3
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 03:51:49.668497    3696 command_runner.go:130] > [Service]
	I0719 03:51:49.668497    3696 command_runner.go:130] > Type=notify
	I0719 03:51:49.668497    3696 command_runner.go:130] > Restart=on-failure
	I0719 03:51:49.668497    3696 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 03:51:49.668497    3696 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 03:51:49.668497    3696 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 03:51:49.668497    3696 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 03:51:49.668497    3696 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 03:51:49.668497    3696 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 03:51:49.668497    3696 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 03:51:49.668497    3696 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNOFILE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNPROC=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitCORE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 03:51:49.668497    3696 command_runner.go:130] > TasksMax=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > TimeoutStartSec=0
	I0719 03:51:49.668497    3696 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 03:51:49.668497    3696 command_runner.go:130] > Delegate=yes
	I0719 03:51:49.669031    3696 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 03:51:49.669031    3696 command_runner.go:130] > KillMode=process
	I0719 03:51:49.669031    3696 command_runner.go:130] > [Install]
	I0719 03:51:49.669031    3696 command_runner.go:130] > WantedBy=multi-user.target
	I0719 03:51:49.680959    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.714100    3696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 03:51:49.772216    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.806868    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:51:49.828840    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:51:49.861009    3696 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 03:51:49.874179    3696 ssh_runner.go:195] Run: which cri-dockerd
	I0719 03:51:49.879587    3696 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 03:51:49.890138    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 03:51:49.907472    3696 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 03:51:49.956150    3696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 03:51:50.235400    3696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 03:51:50.503397    3696 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 03:51:50.503594    3696 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 03:51:50.548434    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:50.826918    3696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 03:53:02.223808    3696 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0719 03:53:02.223949    3696 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0719 03:53:02.224012    3696 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3961883s)
	I0719 03:53:02.236953    3696 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.272978    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274022    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.275557    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	I0719 03:53:02.275580    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.277129    3696 command_runner.go:130] > Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278448    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279165    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	I0719 03:53:02.309205    3696 out.go:177] 
	W0719 03:53:02.311207    3696 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 03:53:02.313026    3696 out.go:239] * 
	* 
	W0719 03:53:02.314501    3696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 03:53:02.320481    3696 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-149600 --alsologtostderr -v=8": exit status 90
functional_test.go:659: soft start took 2m33.0555877s for "functional-149600" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600: exit status 2 (12.2454119s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0719 03:53:03.009660    5608 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/SoftStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 logs -n 25
E0719 03:55:10.074256    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 logs -n 25: (2m48.0109069s)
helpers_test.go:252: TestFunctional/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| addons  | addons-811100 addons disable                                          | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-811100 addons                                                  | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | disable volumesnapshots                                               |                   |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                          | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:38 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                          | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:38 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-811100                                                      | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:39 UTC |
	| addons  | enable dashboard -p                                                   | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-811100                                                      | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:40 UTC |
	| start   | -p nospam-907600 -n=1 --memory=2250 --wait=false                      | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:40 UTC | 19 Jul 24 03:43 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:43 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-907600                                                      | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	| start   | -p functional-149600                                                  | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:50 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                                  | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:50 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:50:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:50:30.014967    3696 out.go:291] Setting OutFile to fd 736 ...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.015716    3696 out.go:304] Setting ErrFile to fd 920...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.039559    3696 out.go:298] Setting JSON to false
	I0719 03:50:30.043125    3696 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20056,"bootTime":1721340973,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:50:30.043125    3696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:50:30.049078    3696 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:50:30.054622    3696 notify.go:220] Checking for updates...
	I0719 03:50:30.058333    3696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:50:30.061118    3696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:50:30.064121    3696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:50:30.066177    3696 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:50:30.069184    3696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:50:30.074042    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:50:30.074369    3696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:50:35.621705    3696 out.go:177] * Using the hyperv driver based on existing profile
	I0719 03:50:35.625671    3696 start.go:297] selected driver: hyperv
	I0719 03:50:35.625671    3696 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.625671    3696 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:50:35.676160    3696 cni.go:84] Creating CNI manager for ""
	I0719 03:50:35.676274    3696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:50:35.676342    3696 start.go:340] cluster config:
	{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.676342    3696 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:50:35.683357    3696 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	I0719 03:50:35.686075    3696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:50:35.686075    3696 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:50:35.686075    3696 cache.go:56] Caching tarball of preloaded images
	I0719 03:50:35.686075    3696 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 03:50:35.686075    3696 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 03:50:35.686926    3696 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
	I0719 03:50:35.688692    3696 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 03:50:35.688692    3696 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
	I0719 03:50:35.689690    3696 start.go:96] Skipping create...Using existing machine configuration
	I0719 03:50:35.689690    3696 fix.go:54] fixHost starting: 
	I0719 03:50:35.689690    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:38.581698    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:38.581801    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:38.581801    3696 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
	W0719 03:50:38.581801    3696 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 03:50:38.589005    3696 out.go:177] * Updating the running hyperv "functional-149600" VM ...
	I0719 03:50:38.591394    3696 machine.go:94] provisionDockerMachine start ...
	I0719 03:50:38.591394    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:40.863553    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:43.596398    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:43.597572    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:43.597572    3696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 03:50:43.733324    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:43.733461    3696 buildroot.go:166] provisioning hostname "functional-149600"
	I0719 03:50:43.733530    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:48.641943    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:48.642699    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:48.642699    3696 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
	I0719 03:50:48.808147    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:48.808147    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:53.570500    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:53.571029    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:53.571029    3696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:50:53.715932    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
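	[editor's note] The /etc/hosts snippet minikube ran over SSH above is idempotent: it only touches the file when the hostname is missing, and it rewrites an existing 127.0.1.1 entry rather than appending a duplicate. A standalone sketch of the same pattern, run against a scratch file (placeholder entries, not the real /etc/hosts) so it is safe to execute anywhere:

```shell
# Scratch file standing in for /etc/hosts; "oldname" is a placeholder entry.
HOSTS=$(mktemp)
NAME=functional-149600
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$HOSTS"

# Only act when the hostname is not already present on some line.
if ! grep -q "[[:space:]]${NAME}\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # Rewrite the existing 127.0.1.1 entry in place (no duplicates).
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" "$HOSTS"
    else
        # No 127.0.1.1 entry yet; append one.
        echo "127.0.1.1 ${NAME}" >> "$HOSTS"
    fi
fi
grep '^127\.0\.1\.1' "$HOSTS"
```

Running it a second time is a no-op, which is why minikube can replay this on an already-provisioned VM.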
	I0719 03:50:53.715932    3696 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 03:50:53.715932    3696 buildroot.go:174] setting up certificates
	I0719 03:50:53.715932    3696 provision.go:84] configureAuth start
	I0719 03:50:53.716479    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:55.878607    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:55.878827    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:55.878961    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:58.506063    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:00.678789    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:03.274493    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:03.275498    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:03.275498    3696 provision.go:143] copyHostCerts
	I0719 03:51:03.275498    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 03:51:03.276037    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 03:51:03.276037    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 03:51:03.276654    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 03:51:03.277651    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 03:51:03.278183    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 03:51:03.278183    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 03:51:03.278428    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 03:51:03.279156    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 03:51:03.279712    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 03:51:03.279842    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 03:51:03.280165    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 03:51:03.281113    3696 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
	I0719 03:51:03.689682    3696 provision.go:177] copyRemoteCerts
	I0719 03:51:03.703822    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:51:03.703822    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:05.944447    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:08.655742    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:08.656027    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:08.656027    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:08.767037    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0631549s)
	I0719 03:51:08.767037    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 03:51:08.767037    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 03:51:08.817664    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 03:51:08.817664    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 03:51:08.866416    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 03:51:08.866625    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 03:51:08.914316    3696 provision.go:87] duration metric: took 15.1982045s to configureAuth
	I0719 03:51:08.914388    3696 buildroot.go:189] setting minikube options for container-runtime
	I0719 03:51:08.914388    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:51:08.915029    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:11.135055    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:11.135661    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:11.135851    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:13.746157    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:13.746776    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:13.746776    3696 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 03:51:13.880918    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 03:51:13.880918    3696 buildroot.go:70] root file system type: tmpfs
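	[editor's note] buildroot.go records the root filesystem type (tmpfs on this buildroot guest) using the probe shown above. The same probe run locally; `--output=fstype` is GNU coreutils, and the value will differ from the guest's on most hosts:

```shell
# Filesystem type of /, as buildroot.go probes it over SSH.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root file system type: $FSTYPE"
```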
	I0719 03:51:13.881582    3696 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 03:51:13.881732    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:16.078246    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:18.698444    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:18.698985    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:18.699205    3696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 03:51:18.866085    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 03:51:18.866196    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:21.047452    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:21.047757    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:21.047885    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:23.641636    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:23.641913    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:23.641913    3696 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 03:51:23.783486    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:51:23.783486    3696 machine.go:97] duration metric: took 45.1915583s to provisionDockerMachine
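	[editor's note] The `sudo diff -u ... || { sudo mv ...; systemctl restart; }` command above is a compare-then-swap: the rendered `docker.service.new` is installed, and Docker restarted, only when it differs from the unit already on disk, so re-provisioning an unchanged VM leaves the daemon alone. The pattern in isolation, with scratch files and a flag in place of the real `systemctl` calls:

```shell
# Scratch files standing in for the installed unit and the freshly rendered one.
CUR=$(mktemp)   # stands in for /lib/systemd/system/docker.service
NEW=$(mktemp)   # stands in for docker.service.new
printf 'ExecStart=/usr/bin/dockerd --old-flags\n' > "$CUR"
printf 'ExecStart=/usr/bin/dockerd --new-flags\n' > "$NEW"

RESTARTED=no
# diff exits non-zero only when the files differ; only then install and "restart".
diff -u "$CUR" "$NEW" > /dev/null || { mv "$NEW" "$CUR"; RESTARTED=yes; }
echo "restarted=$RESTARTED"   # prints restarted=yes for these inputs
```

On a second run with identical content, `diff` exits 0 and the `||` branch (and hence the restart) is skipped.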
	I0719 03:51:23.783595    3696 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
	I0719 03:51:23.783595    3696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:51:23.796656    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:51:23.796656    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:28.598544    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:28.705771    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9090569s)
	I0719 03:51:28.718613    3696 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:51:28.725582    3696 command_runner.go:130] > NAME=Buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 03:51:28.725582    3696 command_runner.go:130] > ID=buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 03:51:28.725582    3696 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 03:51:28.725959    3696 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 03:51:28.727557    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 03:51:28.727636    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 03:51:28.728845    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
	I0719 03:51:28.728930    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> /etc/test/nested/copy/9604/hosts
	I0719 03:51:28.739078    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
	I0719 03:51:28.760168    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 03:51:28.810185    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
	I0719 03:51:28.855606    3696 start.go:296] duration metric: took 5.0719507s for postStartSetup
	I0719 03:51:28.855606    3696 fix.go:56] duration metric: took 53.165288s for fixHost
	I0719 03:51:28.855606    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:31.033424    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:33.666391    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:33.667164    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:33.667164    3696 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 03:51:33.803547    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721361093.813306008
	
	I0719 03:51:33.803653    3696 fix.go:216] guest clock: 1721361093.813306008
	I0719 03:51:33.803653    3696 fix.go:229] Guest: 2024-07-19 03:51:33.813306008 +0000 UTC Remote: 2024-07-19 03:51:28.8556061 +0000 UTC m=+59.006897101 (delta=4.957699908s)
	I0719 03:51:33.803796    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:35.994681    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:35.995703    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:35.995726    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:38.620465    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:38.620535    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:38.625233    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:38.625457    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:38.625457    3696 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721361093
	I0719 03:51:38.774641    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 03:51:33 UTC 2024
	
	I0719 03:51:38.774641    3696 fix.go:236] clock set: Fri Jul 19 03:51:33 UTC 2024
	 (err=<nil>)
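	[editor's note] fix.go above compares the guest clock (read via `date +%s.%N` over SSH) with the host's timestamp, logs the delta (4.957699908s here), and resets the guest with `sudo date -s @<epoch>`. The delta computation with the two epochs from this log, using awk for the fractional arithmetic:

```shell
# Epoch timestamps taken from the fix.go lines above.
GUEST=1721361093.813306008
HOST=1721361088.855606100   # Remote: 2024-07-19 03:51:28.8556061 UTC

# Absolute difference, rounded to milliseconds.
DELTA=$(awk -v g="$GUEST" -v h="$HOST" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.3f", d }')
echo "delta=${DELTA}s"

# The reset command uses whole seconds only (fractional part stripped).
CMD="sudo date -s @${GUEST%.*}"
echo "$CMD"
```

This matches the `sudo date -s @1721361093` command the log shows being sent to the guest.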
	I0719 03:51:38.774738    3696 start.go:83] releasing machines lock for "functional-149600", held for 1m3.0853019s
	I0719 03:51:38.774962    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:40.997351    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:43.635455    3696 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 03:51:43.635455    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:43.646136    3696 ssh_runner.go:195] Run: cat /version.json
	I0719 03:51:43.646827    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:45.897790    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.898865    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.898924    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.906182    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.661005    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.685815    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.685890    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.685890    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.760223    3696 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 03:51:48.760223    3696 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1247074s)
	W0719 03:51:48.760467    3696 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 03:51:48.777389    3696 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 03:51:48.778342    3696 ssh_runner.go:235] Completed: cat /version.json: (5.1321451s)
	I0719 03:51:48.790179    3696 ssh_runner.go:195] Run: systemctl --version
	I0719 03:51:48.799025    3696 command_runner.go:130] > systemd 252 (252)
	I0719 03:51:48.799025    3696 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 03:51:48.809673    3696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 03:51:48.817402    3696 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 03:51:48.818131    3696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 03:51:48.831435    3696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:51:48.850859    3696 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 03:51:48.850859    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:48.851103    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0719 03:51:48.877177    3696 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 03:51:48.877177    3696 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 03:51:48.893340    3696 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 03:51:48.904541    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 03:51:48.935991    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 03:51:48.954279    3696 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 03:51:48.967927    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 03:51:48.997865    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.026438    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 03:51:49.072524    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.117543    3696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:51:49.154251    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 03:51:49.188018    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 03:51:49.222803    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 03:51:49.261427    3696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:51:49.282134    3696 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 03:51:49.294367    3696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:51:49.330587    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:49.594056    3696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 03:51:49.632649    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:49.645484    3696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 03:51:49.668125    3696 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 03:51:49.668402    3696 command_runner.go:130] > [Unit]
	I0719 03:51:49.668402    3696 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 03:51:49.668402    3696 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 03:51:49.668402    3696 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 03:51:49.668497    3696 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitBurst=3
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 03:51:49.668497    3696 command_runner.go:130] > [Service]
	I0719 03:51:49.668497    3696 command_runner.go:130] > Type=notify
	I0719 03:51:49.668497    3696 command_runner.go:130] > Restart=on-failure
	I0719 03:51:49.668497    3696 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 03:51:49.668497    3696 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 03:51:49.668497    3696 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 03:51:49.668497    3696 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 03:51:49.668497    3696 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 03:51:49.668497    3696 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 03:51:49.668497    3696 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 03:51:49.668497    3696 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNOFILE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNPROC=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitCORE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 03:51:49.668497    3696 command_runner.go:130] > TasksMax=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > TimeoutStartSec=0
	I0719 03:51:49.668497    3696 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 03:51:49.668497    3696 command_runner.go:130] > Delegate=yes
	I0719 03:51:49.669031    3696 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 03:51:49.669031    3696 command_runner.go:130] > KillMode=process
	I0719 03:51:49.669031    3696 command_runner.go:130] > [Install]
	I0719 03:51:49.669031    3696 command_runner.go:130] > WantedBy=multi-user.target
	I0719 03:51:49.680959    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.714100    3696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 03:51:49.772216    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.806868    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:51:49.828840    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:51:49.861009    3696 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 03:51:49.874179    3696 ssh_runner.go:195] Run: which cri-dockerd
	I0719 03:51:49.879587    3696 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 03:51:49.890138    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 03:51:49.907472    3696 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 03:51:49.956150    3696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 03:51:50.235400    3696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 03:51:50.503397    3696 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 03:51:50.503594    3696 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 03:51:50.548434    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:50.826918    3696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 03:53:02.223808    3696 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0719 03:53:02.223949    3696 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0719 03:53:02.224012    3696 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3961883s)
	I0719 03:53:02.236953    3696 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.272978    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274022    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.275557    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	I0719 03:53:02.275580    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.277129    3696 command_runner.go:130] > Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278448    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279165    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	I0719 03:53:02.309205    3696 out.go:177] 
	W0719 03:53:02.311207    3696 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 03:53:02.313026    3696 out.go:239] * 
	W0719 03:53:02.314501    3696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 03:53:02.320481    3696 out.go:177] 
	
	
	==> Docker <==
	Jul 19 03:54:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:54:02 functional-149600 dockerd[4794]: time="2024-07-19T03:54:02.625522087Z" level=info msg="Starting up"
	Jul 19 03:55:02 functional-149600 dockerd[4794]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="error getting RW layer size for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d'"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="error getting RW layer size for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0'"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="error getting RW layer size for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f'"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="error getting RW layer size for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703'"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="error getting RW layer size for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2'"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="error getting RW layer size for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26'"
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="error getting RW layer size for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:55:02 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:55:02Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f'"
	Jul 19 03:55:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jul 19 03:55:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:55:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T03:55:04Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.525666] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.198552] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.233106] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +2.882289] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.217497] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.196783] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.258312] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +8.589795] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.109572] kauditd_printk_skb: 202 callbacks suppressed
	[  +5.479934] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.746047] systemd-fstab-generator[1680]: Ignoring "noauto" option for root device
	[  +6.463791] systemd-fstab-generator[1887]: Ignoring "noauto" option for root device
	[  +0.101637] kauditd_printk_skb: 48 callbacks suppressed
	[Jul19 03:50] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.137056] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.913934] systemd-fstab-generator[2516]: Ignoring "noauto" option for root device
	[  +0.188713] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.060318] hrtimer: interrupt took 3867561 ns
	[  +7.580998] kauditd_printk_skb: 90 callbacks suppressed
	[Jul19 03:51] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.149840] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.466272] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.296379] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	[  +0.316733] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +5.318922] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 03:56:03 up 8 min,  0 users,  load average: 0.01, 0.17, 0.13
	Linux functional-149600 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.651372    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?resourceVersion=0&timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.652443    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.653482    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.655020    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.656128    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.656217    2296 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.906899    2296 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.907200    2296 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: I0719 03:56:02.907287    2296 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.907432    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.907499    2296 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.907588    2296 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.907687    2296 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: I0719 03:56:02.907739    2296 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.907805    2296 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.907858    2296 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.907993    2296 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.908062    2296 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.908145    2296 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.908223    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.908350    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.909515    2296 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.910325    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.911234    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 03:56:02 functional-149600 kubelet[2296]: E0719 03:56:02.911412    2296 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 03:53:15.236024    7472 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 03:54:02.446317    7472 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:54:02.479115    7472 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:54:02.512728    7472 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:54:02.544802    7472 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:55:02.644153    7472 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:55:02.684909    7472 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:55:02.715980    7472 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:55:02.746458    7472 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600: exit status 2 (12.0770728s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 03:56:03.749371    2700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-149600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (345.87s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (181.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-149600 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-149600 get po -A: exit status 1 (10.3735265s)

                                                
                                                
** stderr ** 
	E0719 03:56:18.004282    7712 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 03:56:20.118704    7712 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 03:56:22.137536    7712 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 03:56:24.173743    7712 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 03:56:26.215013    7712 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-149600 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"E0719 03:56:18.004282    7712 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.160.82:8441/api?timeout=32s\": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.\nE0719 03:56:20.118704    7712 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.160.82:8441/api?timeout=32s\": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.\nE0719 03:56:22.137536    7712 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.160.82:8441/api?timeout=32s\": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.\nE0719 03:56:24.173743    7712 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.160.82:8441/api?timeout=32s\": dial tcp 172.28.160.82:8441: connec
tex: No connection could be made because the target machine actively refused it.\nE0719 03:56:26.215013    7712 memcache.go:265] couldn't get current server API group list: Get \"https://172.28.160.82:8441/api?timeout=32s\": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.\nUnable to connect to the server: dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.\n"*: args "kubectl --context functional-149600 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-149600 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600: exit status 2 (12.053223s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 03:56:26.322004    2228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 logs -n 25: (2m25.5653353s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                 Args                                  |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| addons  | addons-811100 addons disable                                          | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | ingress-dns --alsologtostderr                                         |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| addons  | addons-811100 addons                                                  | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | disable volumesnapshots                                               |                   |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                |                   |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                          | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:38 UTC |
	|         | ingress --alsologtostderr -v=1                                        |                   |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                          | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:38 UTC |
	|         | gcp-auth --alsologtostderr                                            |                   |                   |         |                     |                     |
	|         | -v=1                                                                  |                   |                   |         |                     |                     |
	| stop    | -p addons-811100                                                      | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:39 UTC |
	| addons  | enable dashboard -p                                                   | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                         |                   |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                  | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                         |                   |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                     | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                         |                   |                   |         |                     |                     |
	| delete  | -p addons-811100                                                      | addons-811100     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:40 UTC |
	| start   | -p nospam-907600 -n=1 --memory=2250 --wait=false                      | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:40 UTC | 19 Jul 24 03:43 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                       |                   |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:43 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | start --dry-run                                                       |                   |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | pause                                                                 |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | unpause                                                               |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                               | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600           |                   |                   |         |                     |                     |
	|         | stop                                                                  |                   |                   |         |                     |                     |
	| delete  | -p nospam-907600                                                      | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	| start   | -p functional-149600                                                  | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:50 UTC |
	|         | --memory=4000                                                         |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                 |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                            |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                                  | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:50 UTC |                     |
	|         | --alsologtostderr -v=8                                                |                   |                   |         |                     |                     |
	|---------|-----------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:50:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:50:30.014967    3696 out.go:291] Setting OutFile to fd 736 ...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.015716    3696 out.go:304] Setting ErrFile to fd 920...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.039559    3696 out.go:298] Setting JSON to false
	I0719 03:50:30.043125    3696 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20056,"bootTime":1721340973,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:50:30.043125    3696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:50:30.049078    3696 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:50:30.054622    3696 notify.go:220] Checking for updates...
	I0719 03:50:30.058333    3696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:50:30.061118    3696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:50:30.064121    3696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:50:30.066177    3696 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:50:30.069184    3696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:50:30.074042    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:50:30.074369    3696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:50:35.621705    3696 out.go:177] * Using the hyperv driver based on existing profile
	I0719 03:50:35.625671    3696 start.go:297] selected driver: hyperv
	I0719 03:50:35.625671    3696 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.625671    3696 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:50:35.676160    3696 cni.go:84] Creating CNI manager for ""
	I0719 03:50:35.676274    3696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:50:35.676342    3696 start.go:340] cluster config:
	{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.676342    3696 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:50:35.683357    3696 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	I0719 03:50:35.686075    3696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:50:35.686075    3696 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:50:35.686075    3696 cache.go:56] Caching tarball of preloaded images
	I0719 03:50:35.686075    3696 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 03:50:35.686075    3696 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 03:50:35.686926    3696 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
	I0719 03:50:35.688692    3696 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 03:50:35.688692    3696 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
	I0719 03:50:35.689690    3696 start.go:96] Skipping create...Using existing machine configuration
	I0719 03:50:35.689690    3696 fix.go:54] fixHost starting: 
	I0719 03:50:35.689690    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:38.581698    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:38.581801    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:38.581801    3696 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
	W0719 03:50:38.581801    3696 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 03:50:38.589005    3696 out.go:177] * Updating the running hyperv "functional-149600" VM ...
	I0719 03:50:38.591394    3696 machine.go:94] provisionDockerMachine start ...
	I0719 03:50:38.591394    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:40.863553    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:43.596398    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:43.597572    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:43.597572    3696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 03:50:43.733324    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:43.733461    3696 buildroot.go:166] provisioning hostname "functional-149600"
	I0719 03:50:43.733530    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:48.641943    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:48.642699    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:48.642699    3696 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
	I0719 03:50:48.808147    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:48.808147    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:53.570500    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:53.571029    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:53.571029    3696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:50:53.715932    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:50:53.715932    3696 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 03:50:53.715932    3696 buildroot.go:174] setting up certificates
	I0719 03:50:53.715932    3696 provision.go:84] configureAuth start
	I0719 03:50:53.716479    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:55.878607    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:55.878827    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:55.878961    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:58.506063    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:00.678789    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:03.274493    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:03.275498    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:03.275498    3696 provision.go:143] copyHostCerts
	I0719 03:51:03.275498    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 03:51:03.276037    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 03:51:03.276037    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 03:51:03.276654    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 03:51:03.277651    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 03:51:03.278183    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 03:51:03.278183    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 03:51:03.278428    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 03:51:03.279156    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 03:51:03.279712    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 03:51:03.279842    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 03:51:03.280165    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 03:51:03.281113    3696 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
	I0719 03:51:03.689682    3696 provision.go:177] copyRemoteCerts
	I0719 03:51:03.703822    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:51:03.703822    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:05.944447    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:08.655742    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:08.656027    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:08.656027    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:08.767037    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0631549s)
	I0719 03:51:08.767037    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 03:51:08.767037    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 03:51:08.817664    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 03:51:08.817664    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 03:51:08.866416    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 03:51:08.866625    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 03:51:08.914316    3696 provision.go:87] duration metric: took 15.1982045s to configureAuth
	I0719 03:51:08.914388    3696 buildroot.go:189] setting minikube options for container-runtime
	I0719 03:51:08.914388    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:51:08.915029    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:11.135055    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:11.135661    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:11.135851    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:13.746157    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:13.746776    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:13.746776    3696 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 03:51:13.880918    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 03:51:13.880918    3696 buildroot.go:70] root file system type: tmpfs
	I0719 03:51:13.881582    3696 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 03:51:13.881732    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:16.078246    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:18.698444    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:18.698985    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:18.699205    3696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 03:51:18.866085    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 03:51:18.866196    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:21.047452    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:21.047757    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:21.047885    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:23.641636    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:23.641913    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:23.641913    3696 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 03:51:23.783486    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:51:23.783486    3696 machine.go:97] duration metric: took 45.1915583s to provisionDockerMachine
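	The update step at 03:51:23 uses a common "install only if changed" idiom: diff the staged unit against the live one, and only when they differ move the staged file into place and reload/enable/restart the service. A minimal sketch of that idiom, run against throwaway temp files rather than the real /lib/systemd/system paths:

```shell
#!/bin/sh
# Sketch of the install-if-changed idiom from the log: stage a new unit
# file, replace the live one only when the contents differ, and (in the
# real flow) follow up with daemon-reload/enable/restart. The temp paths
# stand in for /lib/systemd/system/docker.service{,.new}.
set -eu

live=$(mktemp)   # stands in for docker.service
staged=$(mktemp) # stands in for docker.service.new

printf '%s\n' 'ExecStart=/usr/bin/dockerd --old-flag' > "$live"
printf '%s\n' 'ExecStart=/usr/bin/dockerd --new-flag' > "$staged"

# The log's one-liner: only the failure branch of diff installs the file.
diff -u "$live" "$staged" >/dev/null || mv "$staged" "$live"

grep -- '--new-flag' "$live"
rm -f "$live" "$staged"
```

	When the files already match, diff exits 0 and nothing is installed, which is why an unchanged unit produces the empty SSH output seen at 03:51:23.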
	I0719 03:51:23.783595    3696 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
	I0719 03:51:23.783595    3696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:51:23.796656    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:51:23.796656    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:28.598544    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:28.705771    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9090569s)
	I0719 03:51:28.718613    3696 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:51:28.725582    3696 command_runner.go:130] > NAME=Buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 03:51:28.725582    3696 command_runner.go:130] > ID=buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 03:51:28.725582    3696 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 03:51:28.725959    3696 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 03:51:28.727557    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 03:51:28.727636    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 03:51:28.728845    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
	I0719 03:51:28.728930    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> /etc/test/nested/copy/9604/hosts
	I0719 03:51:28.739078    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
	I0719 03:51:28.760168    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 03:51:28.810185    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
	I0719 03:51:28.855606    3696 start.go:296] duration metric: took 5.0719507s for postStartSetup
	I0719 03:51:28.855606    3696 fix.go:56] duration metric: took 53.165288s for fixHost
	I0719 03:51:28.855606    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:31.033424    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:33.666391    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:33.667164    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:33.667164    3696 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 03:51:33.803547    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721361093.813306008
	
	I0719 03:51:33.803653    3696 fix.go:216] guest clock: 1721361093.813306008
	I0719 03:51:33.803653    3696 fix.go:229] Guest: 2024-07-19 03:51:33.813306008 +0000 UTC Remote: 2024-07-19 03:51:28.8556061 +0000 UTC m=+59.006897101 (delta=4.957699908s)
	I0719 03:51:33.803796    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:35.994681    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:35.995703    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:35.995726    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:38.620465    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:38.620535    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:38.625233    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:38.625457    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:38.625457    3696 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721361093
	I0719 03:51:38.774641    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 03:51:33 UTC 2024
	
	I0719 03:51:38.774641    3696 fix.go:236] clock set: Fri Jul 19 03:51:33 UTC 2024
	 (err=<nil>)
	I0719 03:51:38.774738    3696 start.go:83] releasing machines lock for "functional-149600", held for 1m3.0853019s
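	The clock fix above reads the guest's `date +%s.%N`, compares it against the host wall clock (fix.go:229 reports a delta of ~4.96s), and writes the host epoch back with `sudo date -s @<epoch>`. A rough sketch of the delta computation using the epochs from this run, truncated to integer seconds; the real code keeps nanosecond precision, and actually setting the clock needs root on a live guest, which this sketch does not attempt:

```shell
#!/bin/sh
# Sketch of the guest-clock drift check behind fix.go:216/229. The two
# epochs are taken from this log (guest reading vs. host "Remote" time,
# whole seconds only). The real flow would then issue, over SSH:
#   sudo date -s @<host_epoch>
set -eu

guest_epoch=1721361093   # guest: date +%s at 03:51:33
host_epoch=1721361088    # host "Remote" time, truncated to seconds

delta=$((guest_epoch - host_epoch))
echo "guest is ${delta}s ahead of the host"
```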
	I0719 03:51:38.774962    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:40.997351    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:43.635455    3696 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 03:51:43.635455    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:43.646136    3696 ssh_runner.go:195] Run: cat /version.json
	I0719 03:51:43.646827    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:45.897790    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.898865    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.898924    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.906182    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.661005    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.685815    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.685890    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.685890    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.760223    3696 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 03:51:48.760223    3696 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1247074s)
	W0719 03:51:48.760467    3696 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found

	I0719 03:51:48.777389    3696 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 03:51:48.778342    3696 ssh_runner.go:235] Completed: cat /version.json: (5.1321451s)
	I0719 03:51:48.790179    3696 ssh_runner.go:195] Run: systemctl --version
	I0719 03:51:48.799025    3696 command_runner.go:130] > systemd 252 (252)
	I0719 03:51:48.799025    3696 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 03:51:48.809673    3696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 03:51:48.817402    3696 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 03:51:48.818131    3696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 03:51:48.831435    3696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:51:48.850859    3696 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 03:51:48.850859    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:48.851103    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0719 03:51:48.877177    3696 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 03:51:48.877177    3696 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
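	The `curl.exe: command not found` failure above comes from invoking the Windows binary name inside the Linux guest over SSH, where the tool is plain `curl` (if present at all), which in turn trips the registry-connectivity warning. A hedged sketch of resolving whichever name exists before running it; `pick_cmd` is an illustrative helper, not a minikube function:

```shell
#!/bin/sh
# Sketch: fall back from the Windows binary name to the plain name
# before probing the registry from inside the guest. pick_cmd is an
# illustrative helper and not part of minikube.
set -eu

pick_cmd() {
    # Print the first argument that resolves to an executable.
    for c in "$@"; do
        if command -v "$c" >/dev/null 2>&1; then
            printf '%s\n' "$c"
            return 0
        fi
    done
    return 1
}

# In the buildroot guest this would select "curl" rather than
# "curl.exe" (or report that neither is available).
pick_cmd curl.exe curl || echo "no curl variant available"
```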
	I0719 03:51:48.893340    3696 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 03:51:48.904541    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 03:51:48.935991    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 03:51:48.954279    3696 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 03:51:48.967927    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 03:51:48.997865    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.026438    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 03:51:49.072524    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.117543    3696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:51:49.154251    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 03:51:49.188018    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 03:51:49.222803    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 03:51:49.261427    3696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:51:49.282134    3696 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 03:51:49.294367    3696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:51:49.330587    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:49.594056    3696 ssh_runner.go:195] Run: sudo systemctl restart containerd
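	The run above rewrites `/etc/containerd/config.toml` with a series of in-place `sed -i -r` edits, e.g. forcing `SystemdCgroup = false` to select the cgroupfs driver announced at containerd.go:146. A sketch of that edit applied to a made-up sample fragment instead of the live file (the sed expression is the one shown in the log; GNU sed's `-r` extended-regex flag is assumed, as in the guest):

```shell
#!/bin/sh
# Sketch of the SystemdCgroup sed edit from the log, run against an
# in-memory sample instead of /etc/containerd/config.toml. The TOML
# fragment is illustrative; the sed expression is taken from the log.
set -eu

sample='  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true'

printf '%s\n' "$sample" |
  sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
```

	The `\1` backreference preserves the original indentation, so only the value flips; the remaining sed lines in the log follow the same pattern for the runtime type, `conf_dir`, and `enable_unprivileged_ports`.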
	I0719 03:51:49.632649    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:49.645484    3696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 03:51:49.668125    3696 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 03:51:49.668402    3696 command_runner.go:130] > [Unit]
	I0719 03:51:49.668402    3696 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 03:51:49.668402    3696 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 03:51:49.668402    3696 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 03:51:49.668497    3696 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitBurst=3
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 03:51:49.668497    3696 command_runner.go:130] > [Service]
	I0719 03:51:49.668497    3696 command_runner.go:130] > Type=notify
	I0719 03:51:49.668497    3696 command_runner.go:130] > Restart=on-failure
	I0719 03:51:49.668497    3696 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 03:51:49.668497    3696 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 03:51:49.668497    3696 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 03:51:49.668497    3696 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 03:51:49.668497    3696 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 03:51:49.668497    3696 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 03:51:49.668497    3696 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 03:51:49.668497    3696 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNOFILE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNPROC=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitCORE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 03:51:49.668497    3696 command_runner.go:130] > TasksMax=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > TimeoutStartSec=0
	I0719 03:51:49.668497    3696 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 03:51:49.668497    3696 command_runner.go:130] > Delegate=yes
	I0719 03:51:49.669031    3696 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 03:51:49.669031    3696 command_runner.go:130] > KillMode=process
	I0719 03:51:49.669031    3696 command_runner.go:130] > [Install]
	I0719 03:51:49.669031    3696 command_runner.go:130] > WantedBy=multi-user.target
	I0719 03:51:49.680959    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.714100    3696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 03:51:49.772216    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.806868    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:51:49.828840    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:51:49.861009    3696 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 03:51:49.874179    3696 ssh_runner.go:195] Run: which cri-dockerd
	I0719 03:51:49.879587    3696 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 03:51:49.890138    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 03:51:49.907472    3696 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 03:51:49.956150    3696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 03:51:50.235400    3696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 03:51:50.503397    3696 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 03:51:50.503594    3696 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 03:51:50.548434    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:50.826918    3696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 03:53:02.223808    3696 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0719 03:53:02.223949    3696 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0719 03:53:02.224012    3696 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3961883s)
	I0719 03:53:02.236953    3696 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.272978    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274022    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.275557    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	I0719 03:53:02.275580    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.277129    3696 command_runner.go:130] > Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278448    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279165    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	I0719 03:53:02.309205    3696 out.go:177] 
	W0719 03:53:02.311207    3696 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 03:53:02.313026    3696 out.go:239] * 
	W0719 03:53:02.314501    3696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 03:53:02.320481    3696 out.go:177] 
	
	
	==> Docker <==
	Jul 19 03:57:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:57:03Z" level=error msg="error getting RW layer size for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:57:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:57:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2'"
	Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jul 19 03:57:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:57:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:57:03 functional-149600 dockerd[5584]: time="2024-07-19T03:57:03.385021046Z" level=info msg="Starting up"
	Jul 19 03:58:03 functional-149600 dockerd[5584]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:58:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="error getting RW layer size for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26'"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="error getting RW layer size for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d'"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="error getting RW layer size for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f'"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="error getting RW layer size for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703'"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="error getting RW layer size for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2'"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="error getting RW layer size for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f'"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="error getting RW layer size for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:58:03 functional-149600 cri-dockerd[1341]: time="2024-07-19T03:58:03Z" level=error msg="Set backoffDuration to : 1m0s for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0'"
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T03:58:05Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.525666] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.198552] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.233106] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +2.882289] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.217497] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.196783] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.258312] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +8.589795] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.109572] kauditd_printk_skb: 202 callbacks suppressed
	[  +5.479934] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.746047] systemd-fstab-generator[1680]: Ignoring "noauto" option for root device
	[  +6.463791] systemd-fstab-generator[1887]: Ignoring "noauto" option for root device
	[  +0.101637] kauditd_printk_skb: 48 callbacks suppressed
	[Jul19 03:50] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.137056] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.913934] systemd-fstab-generator[2516]: Ignoring "noauto" option for root device
	[  +0.188713] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.060318] hrtimer: interrupt took 3867561 ns
	[  +7.580998] kauditd_printk_skb: 90 callbacks suppressed
	[Jul19 03:51] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.149840] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.466272] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.296379] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	[  +0.316733] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +5.318922] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 03:59:03 up 11 min,  0 users,  load average: 0.00, 0.09, 0.09
	Linux functional-149600 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 03:58:55 functional-149600 kubelet[2296]: E0719 03:58:55.693621    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?resourceVersion=0&timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:58:55 functional-149600 kubelet[2296]: E0719 03:58:55.694663    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:58:55 functional-149600 kubelet[2296]: E0719 03:58:55.695871    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:58:55 functional-149600 kubelet[2296]: E0719 03:58:55.697012    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:58:55 functional-149600 kubelet[2296]: E0719 03:58:55.697860    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 03:58:55 functional-149600 kubelet[2296]: E0719 03:58:55.697968    2296 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 19 03:58:57 functional-149600 kubelet[2296]: E0719 03:58:57.168447    2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-149600.17e380d0653c8da3\": dial tcp 172.28.160.82:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-149600.17e380d0653c8da3  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-149600,UID:73b077f74b512a0b97280a590f1f1546,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:functional-149600,},FirstTimestamp:2024-07-19 03:51:55.125681571 +0000 UTC m=+110.178930336,LastTimestamp:2024-07-19 03:51:57.132190023 +0000 UTC m=+112.185438688,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-149600,}"
	Jul 19 03:58:58 functional-149600 kubelet[2296]: E0719 03:58:58.607114    2296 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m8.231561623s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 19 03:58:59 functional-149600 kubelet[2296]: E0719 03:58:59.814080    2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused" interval="7s"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.608181    2296 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 7m13.232619232s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.612983    2296 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.613104    2296 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.613221    2296 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.613963    2296 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: I0719 03:59:03.614515    2296 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.616371    2296 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.616575    2296 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.616979    2296 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.617093    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.617125    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.617269    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.617373    2296 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.619589    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.619759    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 03:59:03 functional-149600 kubelet[2296]: E0719 03:59:03.619942    2296 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0719 03:56:38.376686    6388 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 03:57:03.149692    6388 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:57:03.185080    6388 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:57:03.215061    6388 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:57:03.248747    6388 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:57:03.278096    6388 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:57:03.308213    6388 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.46/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kube-controller-manager%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
	E0719 03:58:03.407466    6388 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 03:58:03.439206    6388 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600: exit status 2 (12.584803s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 03:59:04.432337    2132 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-149600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (181.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.58s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-149600 ssh sudo crictl images: exit status 1 (11.5746033s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0719 04:06:06.594451    3840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1122: failed to get images by "out/minikube-windows-amd64.exe -p functional-149600 ssh sudo crictl images" ssh exit status 1
functional_test.go:1126: expected sha for pause:3.3 "0184c1613d929" to be in the output but got *
-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0719 04:06:06.594451    3840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr ***
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (11.58s)

TestFunctional/serial/CacheCmd/cache/cache_reload (179.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-149600 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 1 (47.5823697s)

-- stdout --
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

-- /stdout --
** stderr ** 
	W0719 04:06:18.167735    9668 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1146: failed to manually delete image "out/minikube-windows-amd64.exe -p functional-149600 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 1
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-149600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.4750342s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0719 04:07:05.750127   12948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 cache reload: (1m49.0726638s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-149600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (11.5175768s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	W0719 04:09:06.295795    5684 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1161: expected "out/minikube-windows-amd64.exe -p functional-149600 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 1
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (179.65s)

TestFunctional/serial/MinikubeKubectlCmd (180.75s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 kubectl -- --context functional-149600 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-149600 kubectl -- --context functional-149600 get pods: exit status 1 (10.7115282s)

** stderr ** 
	W0719 04:12:19.912982    3044 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 04:12:22.293639   12408 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 04:12:24.384859   12408 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 04:12:26.421254   12408 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 04:12:28.446493   12408 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 04:12:30.494630   12408 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-149600 kubectl -- --context functional-149600 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600: exit status 2 (12.1890232s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0719 04:12:30.632215    2504 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 logs -n 25: (2m25.1547387s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-907600                                            | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	| start   | -p functional-149600                                        | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:50 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                        | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:50 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                 | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:59 UTC | 19 Jul 24 04:01 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                 | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:01 UTC | 19 Jul 24 04:03 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                 | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:03 UTC | 19 Jul 24 04:05 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                 | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:05 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                 |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache delete                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	| ssh     | functional-149600 ssh sudo                                  | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-149600                                           | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-149600 ssh                                       | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache reload                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC | 19 Jul 24 04:09 UTC |
	| ssh     | functional-149600 ssh                                       | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-149600 kubectl --                                | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:12 UTC |                     |
	|         | --context functional-149600                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:50:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:50:30.014967    3696 out.go:291] Setting OutFile to fd 736 ...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.015716    3696 out.go:304] Setting ErrFile to fd 920...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.039559    3696 out.go:298] Setting JSON to false
	I0719 03:50:30.043125    3696 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20056,"bootTime":1721340973,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:50:30.043125    3696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:50:30.049078    3696 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:50:30.054622    3696 notify.go:220] Checking for updates...
	I0719 03:50:30.058333    3696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:50:30.061118    3696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:50:30.064121    3696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:50:30.066177    3696 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:50:30.069184    3696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:50:30.074042    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:50:30.074369    3696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:50:35.621705    3696 out.go:177] * Using the hyperv driver based on existing profile
	I0719 03:50:35.625671    3696 start.go:297] selected driver: hyperv
	I0719 03:50:35.625671    3696 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.625671    3696 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:50:35.676160    3696 cni.go:84] Creating CNI manager for ""
	I0719 03:50:35.676274    3696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:50:35.676342    3696 start.go:340] cluster config:
	{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.676342    3696 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:50:35.683357    3696 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	I0719 03:50:35.686075    3696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:50:35.686075    3696 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:50:35.686075    3696 cache.go:56] Caching tarball of preloaded images
	I0719 03:50:35.686075    3696 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 03:50:35.686075    3696 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 03:50:35.686926    3696 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
	I0719 03:50:35.688692    3696 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 03:50:35.688692    3696 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
	I0719 03:50:35.689690    3696 start.go:96] Skipping create...Using existing machine configuration
	I0719 03:50:35.689690    3696 fix.go:54] fixHost starting: 
	I0719 03:50:35.689690    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:38.581698    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:38.581801    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:38.581801    3696 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
	W0719 03:50:38.581801    3696 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 03:50:38.589005    3696 out.go:177] * Updating the running hyperv "functional-149600" VM ...
	I0719 03:50:38.591394    3696 machine.go:94] provisionDockerMachine start ...
	I0719 03:50:38.591394    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:40.863553    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:43.596398    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:43.597572    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:43.597572    3696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 03:50:43.733324    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:43.733461    3696 buildroot.go:166] provisioning hostname "functional-149600"
	I0719 03:50:43.733530    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:48.641943    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:48.642699    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:48.642699    3696 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
	I0719 03:50:48.808147    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:48.808147    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:53.570500    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:53.571029    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:53.571029    3696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:50:53.715932    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
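	[editor's note] The SSH command above is a patch-or-append idiom for /etc/hosts: rewrite an existing 127.0.1.1 entry if one is present, otherwise append one. A minimal standalone sketch of the same logic, run against a scratch file rather than the guest's real /etc/hosts (the file path and seed contents here are illustrative, not from the test run):

```shell
# Patch-or-append a 127.0.1.1 hostname entry, mirroring the logged SSH
# command, but against a temporary scratch file (GNU grep/sed assumed).
HOSTNAME_TO_SET="functional-149600"
HOSTS_FILE="$(mktemp)"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS_FILE"

if ! grep -q "\s${HOSTNAME_TO_SET}$" "$HOSTS_FILE"; then
    if grep -q '^127\.0\.1\.1\s' "$HOSTS_FILE"; then
        # An entry already exists: rewrite it in place.
        sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOSTNAME_TO_SET}/" "$HOSTS_FILE"
    else
        # No entry yet: append one.
        echo "127.0.1.1 ${HOSTNAME_TO_SET}" >> "$HOSTS_FILE"
    fi
fi
cat "$HOSTS_FILE"
```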
	I0719 03:50:53.715932    3696 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 03:50:53.715932    3696 buildroot.go:174] setting up certificates
	I0719 03:50:53.715932    3696 provision.go:84] configureAuth start
	I0719 03:50:53.716479    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:55.878607    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:55.878827    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:55.878961    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:58.506063    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:00.678789    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:03.274493    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:03.275498    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:03.275498    3696 provision.go:143] copyHostCerts
	I0719 03:51:03.275498    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 03:51:03.276037    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 03:51:03.276037    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 03:51:03.276654    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 03:51:03.277651    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 03:51:03.278183    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 03:51:03.278183    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 03:51:03.278428    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 03:51:03.279156    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 03:51:03.279712    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 03:51:03.279842    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 03:51:03.280165    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 03:51:03.281113    3696 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
	I0719 03:51:03.689682    3696 provision.go:177] copyRemoteCerts
	I0719 03:51:03.703822    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:51:03.703822    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:05.944447    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:08.655742    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:08.656027    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:08.656027    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:08.767037    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0631549s)
	I0719 03:51:08.767037    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 03:51:08.767037    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 03:51:08.817664    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 03:51:08.817664    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 03:51:08.866416    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 03:51:08.866625    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 03:51:08.914316    3696 provision.go:87] duration metric: took 15.1982045s to configureAuth
	I0719 03:51:08.914388    3696 buildroot.go:189] setting minikube options for container-runtime
	I0719 03:51:08.914388    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:51:08.915029    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:11.135055    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:11.135661    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:11.135851    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:13.746157    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:13.746776    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:13.746776    3696 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 03:51:13.880918    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 03:51:13.880918    3696 buildroot.go:70] root file system type: tmpfs
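	[editor's note] The provisioner probes the guest's root filesystem type over SSH; the Buildroot ISO reports tmpfs here. The same probe is runnable on any Linux host with GNU coreutils:

```shell
# Probe the root filesystem type, as the provisioner does over SSH.
# On the minikube Buildroot guest this prints "tmpfs"; on other hosts
# it prints whatever the root filesystem actually is.
fstype="$(df --output=fstype / | tail -n 1)"
echo "$fstype"
```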
	I0719 03:51:13.881582    3696 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 03:51:13.881732    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:16.078246    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:18.698444    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:18.698985    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:18.699205    3696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 03:51:18.866085    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 03:51:18.866196    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:21.047452    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:21.047757    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:21.047885    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:23.641636    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:23.641913    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:23.641913    3696 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 03:51:23.783486    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:51:23.783486    3696 machine.go:97] duration metric: took 45.1915583s to provisionDockerMachine
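	[editor's note] The `diff ... || { mv ...; systemctl ...; }` command above is an install-if-changed idiom: the new unit file replaces the old one, and the daemon is reloaded and restarted, only when the content actually differs, so an unchanged config leaves the running service alone. A minimal sketch on scratch files, with the systemctl steps stubbed out (file names and contents here are illustrative):

```shell
# Install-if-changed idiom, mirroring the logged command but on scratch
# files; "reloaded" stands in for systemctl daemon-reload && restart.
current="$(mktemp)"; new="$(mktemp)"
echo "ExecStart=/usr/bin/dockerd" > "$current"
echo "ExecStart=/usr/bin/dockerd --tlsverify" > "$new"

# diff exits non-zero when the files differ, triggering the install block.
diff -u "$current" "$new" >/dev/null || {
    mv "$new" "$current"
    echo "reloaded"
}
cat "$current"
```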
	I0719 03:51:23.783595    3696 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
	I0719 03:51:23.783595    3696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:51:23.796656    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:51:23.796656    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:28.598544    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:28.705771    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9090569s)
	I0719 03:51:28.718613    3696 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:51:28.725582    3696 command_runner.go:130] > NAME=Buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 03:51:28.725582    3696 command_runner.go:130] > ID=buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 03:51:28.725582    3696 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 03:51:28.725959    3696 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 03:51:28.727557    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 03:51:28.727636    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 03:51:28.728845    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
	I0719 03:51:28.728930    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> /etc/test/nested/copy/9604/hosts
	I0719 03:51:28.739078    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
	I0719 03:51:28.760168    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 03:51:28.810185    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
	I0719 03:51:28.855606    3696 start.go:296] duration metric: took 5.0719507s for postStartSetup
	I0719 03:51:28.855606    3696 fix.go:56] duration metric: took 53.165288s for fixHost
	I0719 03:51:28.855606    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:31.033424    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:33.666391    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:33.667164    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:33.667164    3696 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 03:51:33.803547    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721361093.813306008
	
	I0719 03:51:33.803653    3696 fix.go:216] guest clock: 1721361093.813306008
	I0719 03:51:33.803653    3696 fix.go:229] Guest: 2024-07-19 03:51:33.813306008 +0000 UTC Remote: 2024-07-19 03:51:28.8556061 +0000 UTC m=+59.006897101 (delta=4.957699908s)
	I0719 03:51:33.803796    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:35.994681    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:35.995703    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:35.995726    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:38.620465    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:38.620535    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:38.625233    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:38.625457    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:38.625457    3696 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721361093
	I0719 03:51:38.774641    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 03:51:33 UTC 2024
	
	I0719 03:51:38.774641    3696 fix.go:236] clock set: Fri Jul 19 03:51:33 UTC 2024
	 (err=<nil>)
	I0719 03:51:38.774738    3696 start.go:83] releasing machines lock for "functional-149600", held for 1m3.0853019s
	I0719 03:51:38.774962    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:40.997351    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:43.635455    3696 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 03:51:43.635455    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:43.646136    3696 ssh_runner.go:195] Run: cat /version.json
	I0719 03:51:43.646827    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:45.897790    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.898865    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.898924    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.906182    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.661005    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.685815    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.685890    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.685890    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.760223    3696 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 03:51:48.760223    3696 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1247074s)
	W0719 03:51:48.760467    3696 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 03:51:48.777389    3696 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 03:51:48.778342    3696 ssh_runner.go:235] Completed: cat /version.json: (5.1321451s)
	I0719 03:51:48.790179    3696 ssh_runner.go:195] Run: systemctl --version
	I0719 03:51:48.799025    3696 command_runner.go:130] > systemd 252 (252)
	I0719 03:51:48.799025    3696 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 03:51:48.809673    3696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 03:51:48.817402    3696 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 03:51:48.818131    3696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 03:51:48.831435    3696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:51:48.850859    3696 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 03:51:48.850859    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:48.851103    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0719 03:51:48.877177    3696 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 03:51:48.877177    3696 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 03:51:48.893340    3696 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 03:51:48.904541    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 03:51:48.935991    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 03:51:48.954279    3696 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 03:51:48.967927    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 03:51:48.997865    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.026438    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 03:51:49.072524    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.117543    3696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:51:49.154251    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 03:51:49.188018    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 03:51:49.222803    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 03:51:49.261427    3696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:51:49.282134    3696 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 03:51:49.294367    3696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:51:49.330587    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:49.594056    3696 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 03:51:49.632649    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:49.645484    3696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 03:51:49.668125    3696 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 03:51:49.668402    3696 command_runner.go:130] > [Unit]
	I0719 03:51:49.668402    3696 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 03:51:49.668402    3696 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 03:51:49.668402    3696 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 03:51:49.668497    3696 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitBurst=3
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 03:51:49.668497    3696 command_runner.go:130] > [Service]
	I0719 03:51:49.668497    3696 command_runner.go:130] > Type=notify
	I0719 03:51:49.668497    3696 command_runner.go:130] > Restart=on-failure
	I0719 03:51:49.668497    3696 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 03:51:49.668497    3696 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 03:51:49.668497    3696 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 03:51:49.668497    3696 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 03:51:49.668497    3696 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 03:51:49.668497    3696 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 03:51:49.668497    3696 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 03:51:49.668497    3696 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNOFILE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNPROC=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitCORE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 03:51:49.668497    3696 command_runner.go:130] > TasksMax=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > TimeoutStartSec=0
	I0719 03:51:49.668497    3696 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 03:51:49.668497    3696 command_runner.go:130] > Delegate=yes
	I0719 03:51:49.669031    3696 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 03:51:49.669031    3696 command_runner.go:130] > KillMode=process
	I0719 03:51:49.669031    3696 command_runner.go:130] > [Install]
	I0719 03:51:49.669031    3696 command_runner.go:130] > WantedBy=multi-user.target
	I0719 03:51:49.680959    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.714100    3696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 03:51:49.772216    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.806868    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:51:49.828840    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:51:49.861009    3696 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 03:51:49.874179    3696 ssh_runner.go:195] Run: which cri-dockerd
	I0719 03:51:49.879587    3696 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 03:51:49.890138    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 03:51:49.907472    3696 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 03:51:49.956150    3696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 03:51:50.235400    3696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 03:51:50.503397    3696 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 03:51:50.503594    3696 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 03:51:50.548434    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:50.826918    3696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 03:53:02.223808    3696 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0719 03:53:02.223949    3696 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0719 03:53:02.224012    3696 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3961883s)
	I0719 03:53:02.236953    3696 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.272978    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274022    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.275557    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	I0719 03:53:02.275580    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.277129    3696 command_runner.go:130] > Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278448    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279165    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	I0719 03:53:02.309205    3696 out.go:177] 
	W0719 03:53:02.311207    3696 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 03:53:02.313026    3696 out.go:239] * 
	W0719 03:53:02.314501    3696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 03:53:02.320481    3696 out.go:177] 
	
	
	==> Docker <==
	Jul 19 04:13:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:13:07Z" level=error msg="error getting RW layer size for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:13:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:13:07Z" level=error msg="Set backoffDuration to : 1m0s for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f'"
	Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jul 19 04:13:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:13:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:13:07 functional-149600 dockerd[9440]: time="2024-07-19T04:13:07.376165478Z" level=info msg="Starting up"
	Jul 19 04:14:07 functional-149600 dockerd[9440]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="error getting RW layer size for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f'"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="error getting RW layer size for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="Set backoffDuration to : 1m0s for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2'"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="error getting RW layer size for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703'"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="error getting RW layer size for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d'"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="error getting RW layer size for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26'"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="error getting RW layer size for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="Set backoffDuration to : 1m0s for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f'"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="error getting RW layer size for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:14:07 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:14:07Z" level=error msg="Set backoffDuration to : 1m0s for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0'"
	Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:14:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T04:14:09Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.525666] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.198552] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.233106] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +2.882289] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.217497] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.196783] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.258312] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +8.589795] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.109572] kauditd_printk_skb: 202 callbacks suppressed
	[  +5.479934] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.746047] systemd-fstab-generator[1680]: Ignoring "noauto" option for root device
	[  +6.463791] systemd-fstab-generator[1887]: Ignoring "noauto" option for root device
	[  +0.101637] kauditd_printk_skb: 48 callbacks suppressed
	[Jul19 03:50] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.137056] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.913934] systemd-fstab-generator[2516]: Ignoring "noauto" option for root device
	[  +0.188713] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.060318] hrtimer: interrupt took 3867561 ns
	[  +7.580998] kauditd_printk_skb: 90 callbacks suppressed
	[Jul19 03:51] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.149840] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.466272] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.296379] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	[  +0.316733] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +5.318922] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 04:15:07 up 27 min,  0 users,  load average: 0.06, 0.04, 0.04
	Linux functional-149600 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 04:15:05 functional-149600 kubelet[2296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:15:05 functional-149600 kubelet[2296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:15:05 functional-149600 kubelet[2296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:15:05 functional-149600 kubelet[2296]: E0719 04:15:05.475350    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?resourceVersion=0&timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:15:05 functional-149600 kubelet[2296]: E0719 04:15:05.476213    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:15:05 functional-149600 kubelet[2296]: E0719 04:15:05.477227    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:15:05 functional-149600 kubelet[2296]: E0719 04:15:05.478297    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:15:05 functional-149600 kubelet[2296]: E0719 04:15:05.479428    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:15:05 functional-149600 kubelet[2296]: E0719 04:15:05.479479    2296 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 19 04:15:06 functional-149600 kubelet[2296]: E0719 04:15:06.156960    2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused" interval="7s"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.614760    2296 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.615173    2296 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.615363    2296 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.617509    2296 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.617542    2296 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: I0719 04:15:07.617556    2296 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.617581    2296 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.617603    2296 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.618127    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.618305    2296 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.625311    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.625452    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.627190    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.627461    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 04:15:07 functional-149600 kubelet[2296]: E0719 04:15:07.627710    2296 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0719 04:12:42.817797   12932 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 04:13:07.132608   12932 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:13:07.166131   12932 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:13:07.197228   12932 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:13:07.231597   12932 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:13:07.264197   12932 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:13:07.296921   12932 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:14:07.399071   12932 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:14:07.435268   12932 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600
E0719 04:15:10.094516    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600: exit status 2 (12.203858s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 04:15:08.471826    5040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-149600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (180.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (181.02s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600: exit status 2 (12.388681s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0719 04:15:20.658290    4964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 logs -n 25: (2m35.6353458s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                     | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-907600                                            | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	| start   | -p functional-149600                                        | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:50 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                        | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:50 UTC |                     |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                 | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:59 UTC | 19 Jul 24 04:01 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                 | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:01 UTC | 19 Jul 24 04:03 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                 | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:03 UTC | 19 Jul 24 04:05 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                 | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:05 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                 |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache delete                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	| ssh     | functional-149600 ssh sudo                                  | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-149600                                           | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-149600 ssh                                       | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache reload                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC | 19 Jul 24 04:09 UTC |
	| ssh     | functional-149600 ssh                                       | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-149600 kubectl --                                | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:12 UTC |                     |
	|         | --context functional-149600                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:50:30
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:50:30.014967    3696 out.go:291] Setting OutFile to fd 736 ...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.015716    3696 out.go:304] Setting ErrFile to fd 920...
	I0719 03:50:30.015716    3696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:50:30.039559    3696 out.go:298] Setting JSON to false
	I0719 03:50:30.043125    3696 start.go:129] hostinfo: {"hostname":"minikube6","uptime":20056,"bootTime":1721340973,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:50:30.043125    3696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:50:30.049078    3696 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:50:30.054622    3696 notify.go:220] Checking for updates...
	I0719 03:50:30.058333    3696 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:50:30.061118    3696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 03:50:30.064121    3696 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:50:30.066177    3696 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 03:50:30.069184    3696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 03:50:30.074042    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:50:30.074369    3696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:50:35.621705    3696 out.go:177] * Using the hyperv driver based on existing profile
	I0719 03:50:35.625671    3696 start.go:297] selected driver: hyperv
	I0719 03:50:35.625671    3696 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.625671    3696 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 03:50:35.676160    3696 cni.go:84] Creating CNI manager for ""
	I0719 03:50:35.676274    3696 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:50:35.676342    3696 start.go:340] cluster config:
	{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:50:35.676342    3696 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:50:35.683357    3696 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	I0719 03:50:35.686075    3696 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:50:35.686075    3696 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:50:35.686075    3696 cache.go:56] Caching tarball of preloaded images
	I0719 03:50:35.686075    3696 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 03:50:35.686075    3696 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 03:50:35.686926    3696 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
	I0719 03:50:35.688692    3696 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 03:50:35.688692    3696 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
	I0719 03:50:35.689690    3696 start.go:96] Skipping create...Using existing machine configuration
	I0719 03:50:35.689690    3696 fix.go:54] fixHost starting: 
	I0719 03:50:35.689690    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:38.581698    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:38.581801    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:38.581801    3696 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
	W0719 03:50:38.581801    3696 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 03:50:38.589005    3696 out.go:177] * Updating the running hyperv "functional-149600" VM ...
	I0719 03:50:38.591394    3696 machine.go:94] provisionDockerMachine start ...
	I0719 03:50:38.591394    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:40.863423    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:40.863553    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:43.589830    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:43.596398    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:43.597572    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:43.597572    3696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 03:50:43.733324    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:43.733461    3696 buildroot.go:166] provisioning hostname "functional-149600"
	I0719 03:50:43.733530    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:46.004354    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:48.635705    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:48.641943    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:48.642699    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:48.642699    3696 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
	I0719 03:50:48.808147    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 03:50:48.808147    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:50.983670    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:53.564554    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:53.570500    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:50:53.571029    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:50:53.571029    3696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 03:50:53.715932    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 03:50:53.715932    3696 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 03:50:53.715932    3696 buildroot.go:174] setting up certificates
	I0719 03:50:53.715932    3696 provision.go:84] configureAuth start
	I0719 03:50:53.716479    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:50:55.878607    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:50:55.878827    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:55.878961    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:50:58.506063    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:50:58.506342    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:00.678396    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:00.678789    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:03.274493    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:03.275498    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:03.275498    3696 provision.go:143] copyHostCerts
	I0719 03:51:03.275498    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 03:51:03.276037    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 03:51:03.276037    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 03:51:03.276654    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 03:51:03.277651    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 03:51:03.278183    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 03:51:03.278183    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 03:51:03.278428    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 03:51:03.279156    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 03:51:03.279712    3696 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 03:51:03.279842    3696 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 03:51:03.280165    3696 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 03:51:03.281113    3696 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
	I0719 03:51:03.689682    3696 provision.go:177] copyRemoteCerts
	I0719 03:51:03.703822    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 03:51:03.703822    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:05.944447    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:05.945222    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:08.655742    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:08.656027    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:08.656027    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:08.767037    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.0631549s)
	I0719 03:51:08.767037    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 03:51:08.767037    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 03:51:08.817664    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 03:51:08.817664    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 03:51:08.866416    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 03:51:08.866625    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 03:51:08.914316    3696 provision.go:87] duration metric: took 15.1982045s to configureAuth
	I0719 03:51:08.914388    3696 buildroot.go:189] setting minikube options for container-runtime
	I0719 03:51:08.914388    3696 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 03:51:08.915029    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:11.135055    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:11.135661    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:11.135851    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:13.741166    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:13.746157    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:13.746776    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:13.746776    3696 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 03:51:13.880918    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 03:51:13.880918    3696 buildroot.go:70] root file system type: tmpfs
	I0719 03:51:13.881582    3696 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 03:51:13.881732    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:16.077328    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:16.078246    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:18.691853    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:18.698444    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:18.698985    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:18.699205    3696 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 03:51:18.866085    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 03:51:18.866196    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:21.047452    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:21.047757    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:21.047885    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:23.635931    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:23.641636    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:23.641913    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:23.641913    3696 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 03:51:23.783486    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
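The `diff -u … || { mv …; systemctl … }` command above is an idempotent config-update pattern: the new unit file is written to `docker.service.new`, and it only replaces the live file (and triggers a daemon reload/restart) when the content actually differs. A minimal sketch of that pattern against temp files (paths and contents here are illustrative, not minikube's real ones):

```shell
# Write the candidate config to <file>.new, install it only if it differs
# from the current file; identical content means no restart is needed.
set -eu
dir=$(mktemp -d)
cfg="$dir/docker.service"
printf '[Service]\nExecStart=/usr/bin/dockerd\n' > "$cfg"
printf '[Service]\nExecStart=/usr/bin/dockerd --tlsverify\n' > "$cfg.new"
if diff -u "$cfg" "$cfg.new" >/dev/null; then
  rm "$cfg.new"          # identical: keep the existing file untouched
else
  mv "$cfg.new" "$cfg"   # changed: install the new unit (a reload/restart would follow)
fi
grep -q 'tlsverify' "$cfg" && echo updated
```

In the log the empty `SSH cmd err, output: <nil>` line that follows indicates the replace-and-restart branch completed without error.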
	I0719 03:51:23.783486    3696 machine.go:97] duration metric: took 45.1915583s to provisionDockerMachine
	I0719 03:51:23.783595    3696 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
	I0719 03:51:23.783595    3696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 03:51:23.796656    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 03:51:23.796656    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:25.981376    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:28.598484    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:28.598544    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:28.705771    3696 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9090569s)
	I0719 03:51:28.718613    3696 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 03:51:28.725582    3696 command_runner.go:130] > NAME=Buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 03:51:28.725582    3696 command_runner.go:130] > ID=buildroot
	I0719 03:51:28.725582    3696 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 03:51:28.725582    3696 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 03:51:28.725959    3696 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 03:51:28.725959    3696 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 03:51:28.727557    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 03:51:28.727636    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 03:51:28.728845    3696 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
	I0719 03:51:28.728930    3696 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> /etc/test/nested/copy/9604/hosts
	I0719 03:51:28.739078    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
	I0719 03:51:28.760168    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 03:51:28.810185    3696 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
	I0719 03:51:28.855606    3696 start.go:296] duration metric: took 5.0719507s for postStartSetup
	I0719 03:51:28.855606    3696 fix.go:56] duration metric: took 53.165288s for fixHost
	I0719 03:51:28.855606    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:31.033424    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:31.033992    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:33.661469    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:33.666391    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:33.667164    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:33.667164    3696 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 03:51:33.803547    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721361093.813306008
	
	I0719 03:51:33.803653    3696 fix.go:216] guest clock: 1721361093.813306008
	I0719 03:51:33.803653    3696 fix.go:229] Guest: 2024-07-19 03:51:33.813306008 +0000 UTC Remote: 2024-07-19 03:51:28.8556061 +0000 UTC m=+59.006897101 (delta=4.957699908s)
	I0719 03:51:33.803796    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:35.994681    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:35.995703    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:35.995726    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:38.620465    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:38.620535    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:38.625233    3696 main.go:141] libmachine: Using SSH client type: native
	I0719 03:51:38.625457    3696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 03:51:38.625457    3696 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721361093
	I0719 03:51:38.774641    3696 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 03:51:33 UTC 2024
	
	I0719 03:51:38.774641    3696 fix.go:236] clock set: Fri Jul 19 03:51:33 UTC 2024
	 (err=<nil>)
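The preceding lines show minikube measuring guest clock skew (here `delta=4.957699908s`) and correcting it with `sudo date -s @<epoch>`. A hedged sketch of that check, using the epoch value from this log and an assumed skew threshold (the real threshold lives in minikube's `fix.go`, not shown here):

```shell
# Compare a guest-reported epoch timestamp against a host reference and
# decide whether to resync the guest clock. Values taken from the log above;
# the 2-second threshold is an assumption for illustration.
guest=1721361093          # epoch seconds reported by the guest VM
host=1721361088           # host-side reference (delta is ~5s in this run)
delta=$((guest - host))
if [ "${delta#-}" -gt 2 ]; then
  echo "resync: date -s @$guest"   # in the log this runs via sudo over SSH
else
  echo "in sync"
fi
```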
	I0719 03:51:38.774738    3696 start.go:83] releasing machines lock for "functional-149600", held for 1m3.0853019s
	I0719 03:51:38.774962    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:40.997351    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:40.997570    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:43.631283    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:43.635455    3696 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 03:51:43.635455    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:43.646136    3696 ssh_runner.go:195] Run: cat /version.json
	I0719 03:51:43.646827    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 03:51:45.897790    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.898865    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.898924    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 03:51:45.905632    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:45.906182    3696 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.659880    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.661005    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.685815    3696 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 03:51:48.685890    3696 main.go:141] libmachine: [stderr =====>] : 
	I0719 03:51:48.685890    3696 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 03:51:48.760223    3696 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 03:51:48.760223    3696 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1247074s)
	W0719 03:51:48.760467    3696 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 03:51:48.777389    3696 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 03:51:48.778342    3696 ssh_runner.go:235] Completed: cat /version.json: (5.1321451s)
	I0719 03:51:48.790179    3696 ssh_runner.go:195] Run: systemctl --version
	I0719 03:51:48.799025    3696 command_runner.go:130] > systemd 252 (252)
	I0719 03:51:48.799025    3696 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 03:51:48.809673    3696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 03:51:48.817402    3696 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 03:51:48.818131    3696 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 03:51:48.831435    3696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 03:51:48.850859    3696 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 03:51:48.850859    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:48.851103    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0719 03:51:48.877177    3696 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 03:51:48.877177    3696 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 03:51:48.893340    3696 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 03:51:48.904541    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 03:51:48.935991    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 03:51:48.954279    3696 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 03:51:48.967927    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 03:51:48.997865    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.026438    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 03:51:49.072524    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 03:51:49.117543    3696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 03:51:49.154251    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 03:51:49.188018    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 03:51:49.222803    3696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 03:51:49.261427    3696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 03:51:49.282134    3696 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 03:51:49.294367    3696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 03:51:49.330587    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:49.594056    3696 ssh_runner.go:195] Run: sudo systemctl restart containerd
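The burst of `sed -i -r` runs above rewrites `/etc/containerd/config.toml` in place to pin the sandbox image and force the `cgroupfs` driver (`SystemdCgroup = false`) before restarting containerd. A sketch of the same edits against a throwaway copy, so the pattern is reproducible without touching a live config:

```shell
# Apply the two key substitutions from the log to a temp config file.
# The indentation-preserving capture group \1 mirrors minikube's sed usage.
set -eu
cfg=$(mktemp)
printf '    SystemdCgroup = true\n    sandbox_image = "old/pause:3.6"\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' "$cfg"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
cat "$cfg"
```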
	I0719 03:51:49.632649    3696 start.go:495] detecting cgroup driver to use...
	I0719 03:51:49.645484    3696 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 03:51:49.668125    3696 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 03:51:49.668402    3696 command_runner.go:130] > [Unit]
	I0719 03:51:49.668402    3696 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 03:51:49.668402    3696 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 03:51:49.668402    3696 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 03:51:49.668497    3696 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitBurst=3
	I0719 03:51:49.668497    3696 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 03:51:49.668497    3696 command_runner.go:130] > [Service]
	I0719 03:51:49.668497    3696 command_runner.go:130] > Type=notify
	I0719 03:51:49.668497    3696 command_runner.go:130] > Restart=on-failure
	I0719 03:51:49.668497    3696 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 03:51:49.668497    3696 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 03:51:49.668497    3696 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 03:51:49.668497    3696 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 03:51:49.668497    3696 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 03:51:49.668497    3696 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 03:51:49.668497    3696 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 03:51:49.668497    3696 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 03:51:49.668497    3696 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNOFILE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitNPROC=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > LimitCORE=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 03:51:49.668497    3696 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 03:51:49.668497    3696 command_runner.go:130] > TasksMax=infinity
	I0719 03:51:49.668497    3696 command_runner.go:130] > TimeoutStartSec=0
	I0719 03:51:49.668497    3696 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 03:51:49.668497    3696 command_runner.go:130] > Delegate=yes
	I0719 03:51:49.669031    3696 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 03:51:49.669031    3696 command_runner.go:130] > KillMode=process
	I0719 03:51:49.669031    3696 command_runner.go:130] > [Install]
	I0719 03:51:49.669031    3696 command_runner.go:130] > WantedBy=multi-user.target
	I0719 03:51:49.680959    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.714100    3696 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 03:51:49.772216    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 03:51:49.806868    3696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 03:51:49.828840    3696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 03:51:49.861009    3696 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 03:51:49.874179    3696 ssh_runner.go:195] Run: which cri-dockerd
	I0719 03:51:49.879587    3696 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 03:51:49.890138    3696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 03:51:49.907472    3696 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 03:51:49.956150    3696 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 03:51:50.235400    3696 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 03:51:50.503397    3696 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 03:51:50.503594    3696 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 03:51:50.548434    3696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 03:51:50.826918    3696 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 03:53:02.223808    3696 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0719 03:53:02.223949    3696 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0719 03:53:02.224012    3696 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.3961883s)
	I0719 03:53:02.236953    3696 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.270395    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.270707    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.270775    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270913    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.270991    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271066    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271174    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271283    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.271348    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.271420    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.271510    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271580    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.271651    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271722    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271793    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271879    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.271983    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272058    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272189    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272278    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272385    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.272456    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.272527    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.272598    3696 command_runner.go:130] > Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.272668    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	I0719 03:53:02.272741    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.272815    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.272906    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.272978    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.273055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	I0719 03:53:02.273128    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	I0719 03:53:02.273201    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.273273    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273345    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273418    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273495    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273569    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.273642    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.273713    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.273929    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274022    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.274055    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274086    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.274666    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.274821    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.274898    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	I0719 03:53:02.274989    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	I0719 03:53:02.275065    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.275139    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.275211    3696 command_runner.go:130] > Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.275286    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.275358    3696 command_runner.go:130] > Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.275430    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.275557    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	I0719 03:53:02.275580    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0719 03:53:02.275610    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0719 03:53:02.276186    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276335    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0719 03:53:02.276367    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	I0719 03:53:02.276944    3696 command_runner.go:130] > Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0719 03:53:02.277129    3696 command_runner.go:130] > Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277192    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277725    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.277873    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278448    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278487    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0719 03:53:02.278588    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279165    3696 command_runner.go:130] > Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.279205    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280264    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0719 03:53:02.280401    3696 command_runner.go:130] > Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	I0719 03:53:02.309205    3696 out.go:177] 
	W0719 03:53:02.311207    3696 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 03:53:02.313026    3696 out.go:239] * 
	W0719 03:53:02.314501    3696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 03:53:02.320481    3696 out.go:177] 
	
	
	==> Docker <==
	Jul 19 04:16:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:16:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:16:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jul 19 04:16:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:16:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:16:08 functional-149600 dockerd[10215]: time="2024-07-19T04:16:08.121334668Z" level=info msg="Starting up"
	Jul 19 04:17:08 functional-149600 dockerd[10215]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="error getting RW layer size for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d'"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="error getting RW layer size for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703'"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="error getting RW layer size for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f'"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="error getting RW layer size for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0'"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="error getting RW layer size for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f'"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="error getting RW layer size for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26'"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="error getting RW layer size for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:17:08 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:17:08Z" level=error msg="Set backoffDuration to : 1m0s for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2'"
	Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:17:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T04:17:10Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.525666] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.198552] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.233106] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +2.882289] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.217497] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.196783] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.258312] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +8.589795] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.109572] kauditd_printk_skb: 202 callbacks suppressed
	[  +5.479934] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.746047] systemd-fstab-generator[1680]: Ignoring "noauto" option for root device
	[  +6.463791] systemd-fstab-generator[1887]: Ignoring "noauto" option for root device
	[  +0.101637] kauditd_printk_skb: 48 callbacks suppressed
	[Jul19 03:50] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.137056] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.913934] systemd-fstab-generator[2516]: Ignoring "noauto" option for root device
	[  +0.188713] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.060318] hrtimer: interrupt took 3867561 ns
	[  +7.580998] kauditd_printk_skb: 90 callbacks suppressed
	[Jul19 03:51] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.149840] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.466272] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.296379] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	[  +0.316733] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +5.318922] kauditd_printk_skb: 89 callbacks suppressed
	
	
	==> kernel <==
	 04:18:08 up 30 min,  0 users,  load average: 0.00, 0.01, 0.02
	Linux functional-149600 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 04:17:59 functional-149600 kubelet[2296]: E0719 04:17:59.758484    2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 172.28.160.82:8441: connect: connection refused" event="&Event{ObjectMeta:{coredns-7db6d8ff4d-vgndl.17e380d2651b79e4  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:coredns-7db6d8ff4d-vgndl,UID:ed8aeab5-7b7c-4fc8-a973-5a5039c177ea,APIVersion:v1,ResourceVersion:340,FieldPath:spec.containers{coredns},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://10.244.0.2:8080/health\": dial tcp 10.244.0.2:8080: connect: no route to host,Source:EventSource{Component:kubelet,Host:functional-149600,},FirstTimestamp:2024-07-19 03:52:03.71344842 +0000 UTC m=+118.766697085,LastTimestamp:2024-07-19 03:52:03.71344842 +0000 UTC m=+118.766697085,Count:1,Type:Warning
,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-149600,}"
	Jul 19 04:18:01 functional-149600 kubelet[2296]: E0719 04:18:01.219611    2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused" interval="7s"
	Jul 19 04:18:03 functional-149600 kubelet[2296]: E0719 04:18:03.825921    2296 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 26m13.450375681s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 19 04:18:05 functional-149600 kubelet[2296]: I0719 04:18:05.191517    2296 status_manager.go:853] "Failed to get status for pod" podUID="73b077f74b512a0b97280a590f1f1546" pod="kube-system/kube-apiserver-functional-149600" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-149600\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:18:05 functional-149600 kubelet[2296]: E0719 04:18:05.237784    2296 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:18:05 functional-149600 kubelet[2296]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:18:05 functional-149600 kubelet[2296]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:18:05 functional-149600 kubelet[2296]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:18:05 functional-149600 kubelet[2296]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.221245    2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused" interval="7s"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.341988    2296 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.342197    2296 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: I0719 04:18:08.342318    2296 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.342609    2296 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.343072    2296 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.343117    2296 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.343160    2296 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.343183    2296 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.343222    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.343373    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.343335    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.344117    2296 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.346605    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.346678    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 04:18:08 functional-149600 kubelet[2296]: E0719 04:18:08.347300    2296 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:15:33.062425    3712 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 04:16:07.860490    3712 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:16:07.903596    3712 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:16:07.932934    3712 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:16:07.966166    3712 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:16:07.998019    3712 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:16:08.029459    3712 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:17:08.148526    3712 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:17:08.184840    3712 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600
E0719 04:18:13.295525    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600: exit status 2 (12.5579652s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:18:09.205237    4728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-149600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (181.02s)

                                                
                                    
TestFunctional/serial/ExtraConfig (300.71s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-149600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0719 04:20:10.103301    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-149600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 90 (2m47.5445745s)

                                                
                                                
-- stdout --
	* [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	* Updating the running hyperv "functional-149600" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:18:21.689299   11908 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jul 19 03:53:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:53:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:53:02 functional-149600 dockerd[4517]: time="2024-07-19T03:53:02.427497928Z" level=info msg="Starting up"
	Jul 19 03:54:02 functional-149600 dockerd[4517]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:54:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 19 03:54:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:54:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:54:02 functional-149600 dockerd[4794]: time="2024-07-19T03:54:02.625522087Z" level=info msg="Starting up"
	Jul 19 03:55:02 functional-149600 dockerd[4794]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:55:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jul 19 03:55:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:55:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:55:02 functional-149600 dockerd[5024]: time="2024-07-19T03:55:02.867963022Z" level=info msg="Starting up"
	Jul 19 03:56:02 functional-149600 dockerd[5024]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:56:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:56:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:56:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:56:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jul 19 03:56:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:56:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:56:03 functional-149600 dockerd[5250]: time="2024-07-19T03:56:03.114424888Z" level=info msg="Starting up"
	[... identical restart cycles elided: dockerd repeatedly failed to dial "/run/containerd/containerd.sock" (context deadline exceeded) every ~60s and systemd rescheduled docker.service, the restart counter climbing from 5 at 03:57:03 to 27 at 04:19:08 ...]
	Jul 19 04:19:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:19:08 functional-149600 dockerd[11028]: time="2024-07-19T04:19:08.881713903Z" level=info msg="Starting up"
	Jul 19 04:19:41 functional-149600 dockerd[11028]: time="2024-07-19T04:19:41.104825080Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:20:08 functional-149600 dockerd[11028]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:20:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:20:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:20:08 functional-149600 dockerd[11475]: time="2024-07-19T04:20:08.959849556Z" level=info msg="Starting up"
	Jul 19 04:21:08 functional-149600 dockerd[11475]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:21:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-149600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 90
functional_test.go:757: restart took 2m47.7005437s for "functional-149600" cluster.
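The journal excerpt above is one failure mode repeated: dockerd times out dialing `/run/containerd/containerd.sock`, exits, and systemd reschedules `docker.service` roughly once a minute. When triaging a log like this, it helps to reduce the wall of repeats to two numbers: how many restarts systemd scheduled, and the highest restart counter reached. A minimal sketch (the sample file below is a hypothetical three-line excerpt mirroring the journald format above, not the full log):

```shell
# Build a small sample in the same journald format as the log above
# (hypothetical excerpt; substitute the real log file when triaging).
cat > /tmp/docker-restarts.log <<'EOF'
Jul 19 03:56:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
EOF

# How many times systemd scheduled a docker.service restart
grep -c 'Scheduled restart job' /tmp/docker-restarts.log
# -> 2

# Highest restart counter reached (last field of the matched phrase)
grep -o 'restart counter is at [0-9]*' /tmp/docker-restarts.log \
  | awk '{print $NF}' | sort -n | tail -1
# -> 5
```

If the counter climbs monotonically with no successful "Started Docker Application Container Engine" line in between (as here), the daemon never came up at all, which points at its containerd dependency rather than at dockerd itself.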
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600: exit status 2 (12.434987s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:21:09.421024    4360 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ExtraConfig]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 logs -n 25: (1m48.1223238s)
helpers_test.go:252: TestFunctional/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-907600                                                         | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	| start   | -p functional-149600                                                     | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:50 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                                     | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:50 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:59 UTC | 19 Jul 24 04:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:01 UTC | 19 Jul 24 04:03 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:03 UTC | 19 Jul 24 04:05 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:05 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                              |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache delete                                           | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	| ssh     | functional-149600 ssh sudo                                               | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-149600                                                        | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-149600 ssh                                                    | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache reload                                           | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC | 19 Jul 24 04:09 UTC |
	| ssh     | functional-149600 ssh                                                    | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-149600 kubectl --                                             | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:12 UTC |                     |
	|         | --context functional-149600                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                                     | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:18 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:18:21
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:18:21.766878   11908 out.go:291] Setting OutFile to fd 508 ...
	I0719 04:18:21.767672   11908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:18:21.767672   11908 out.go:304] Setting ErrFile to fd 628...
	I0719 04:18:21.767704   11908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:18:21.802747   11908 out.go:298] Setting JSON to false
	I0719 04:18:21.806931   11908 start.go:129] hostinfo: {"hostname":"minikube6","uptime":21727,"bootTime":1721340973,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 04:18:21.807048   11908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 04:18:21.811368   11908 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 04:18:21.814314   11908 notify.go:220] Checking for updates...
	I0719 04:18:21.815240   11908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:18:21.817700   11908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:18:21.821305   11908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 04:18:21.823619   11908 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:18:21.827233   11908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:18:21.830726   11908 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:18:21.830936   11908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:18:27.199268   11908 out.go:177] * Using the hyperv driver based on existing profile
	I0719 04:18:27.203184   11908 start.go:297] selected driver: hyperv
	I0719 04:18:27.203184   11908 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:18:27.203184   11908 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:18:27.252525   11908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:18:27.252525   11908 cni.go:84] Creating CNI manager for ""
	I0719 04:18:27.252525   11908 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 04:18:27.252525   11908 start.go:340] cluster config:
	{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:18:27.253102   11908 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:18:27.258660   11908 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	I0719 04:18:27.260254   11908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:18:27.261246   11908 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 04:18:27.261246   11908 cache.go:56] Caching tarball of preloaded images
	I0719 04:18:27.261246   11908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:18:27.261246   11908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:18:27.261246   11908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
	I0719 04:18:27.263301   11908 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:18:27.263301   11908 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
	I0719 04:18:27.263301   11908 start.go:96] Skipping create...Using existing machine configuration
	I0719 04:18:27.263301   11908 fix.go:54] fixHost starting: 
	I0719 04:18:27.264677   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:30.104551   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:30.104551   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:30.104990   11908 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
	W0719 04:18:30.104990   11908 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 04:18:30.108813   11908 out.go:177] * Updating the running hyperv "functional-149600" VM ...
	I0719 04:18:30.112780   11908 machine.go:94] provisionDockerMachine start ...
	I0719 04:18:30.112780   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:32.281432   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:32.282235   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:32.282235   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:34.875102   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:34.875610   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:34.880845   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:34.881446   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:34.881446   11908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:18:35.021234   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 04:18:35.021324   11908 buildroot.go:166] provisioning hostname "functional-149600"
	I0719 04:18:35.021324   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:37.170899   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:37.170899   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:37.171700   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:39.728642   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:39.728642   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:39.734540   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:39.735114   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:39.735114   11908 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
	I0719 04:18:39.893752   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 04:18:39.893752   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:42.020647   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:42.021626   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:42.021626   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:44.602600   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:44.602600   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:44.610065   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:44.610065   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:44.610065   11908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:18:44.753558   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
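	(Editor's note: the SSH command above patches /etc/hosts on the guest: if the hostname is missing, it rewrites an existing 127.0.1.1 line, otherwise appends one. A minimal local sketch of that same logic against a temporary file; the "oldname" entry is an assumption for illustration, not taken from this run.)

```shell
#!/bin/sh
# Sketch of the /etc/hosts update logic run over SSH in the log above.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
name=functional-149600

# Same shape as the logged snippet: only touch the file when the
# hostname is absent; prefer rewriting 127.0.1.1 over appending.
if ! grep -q "[[:space:]]$name" "$hosts"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $name/" "$hosts"
  else
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
cat "$hosts"
```

Running it a second time is a no-op, which is why minikube can safely re-run the snippet on an existing VM.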
	I0719 04:18:44.753558   11908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:18:44.753558   11908 buildroot.go:174] setting up certificates
	I0719 04:18:44.753558   11908 provision.go:84] configureAuth start
	I0719 04:18:44.753558   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:46.923705   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:46.923915   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:46.923915   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:49.456995   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:49.457146   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:49.457146   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:51.630822   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:51.630822   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:51.631464   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:54.211617   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:54.211617   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:54.211905   11908 provision.go:143] copyHostCerts
	I0719 04:18:54.222331   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:18:54.222331   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:18:54.223238   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:18:54.233187   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:18:54.233187   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:18:54.233187   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:18:54.242612   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:18:54.242612   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:18:54.242944   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:18:54.244582   11908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
	I0719 04:18:54.390527   11908 provision.go:177] copyRemoteCerts
	I0719 04:18:54.401534   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:18:54.401534   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:59.164667   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:59.164667   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:59.166132   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:18:59.268712   11908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8670097s)
	I0719 04:18:59.269401   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:18:59.315850   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 04:18:59.360883   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:18:59.404491   11908 provision.go:87] duration metric: took 14.650763s to configureAuth
	I0719 04:18:59.404491   11908 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:18:59.405435   11908 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:18:59.405475   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:01.593936   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:01.593936   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:01.594118   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:04.230442   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:04.230442   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:04.239294   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:04.239760   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:04.239760   11908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:19:04.380052   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:19:04.380052   11908 buildroot.go:70] root file system type: tmpfs
	I0719 04:19:04.380261   11908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:19:04.380347   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:06.539303   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:06.539515   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:06.539515   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:09.127043   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:09.127043   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:09.133308   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:09.133448   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:09.133448   11908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:19:09.306386   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 04:19:09.306918   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:11.508256   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:11.508256   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:11.509277   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:14.068121   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:14.068121   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:14.074438   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:14.075220   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:14.075220   11908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:19:14.221726   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:19:14.221726   11908 machine.go:97] duration metric: took 44.1084347s to provisionDockerMachine
	I0719 04:19:14.221726   11908 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
	I0719 04:19:14.221726   11908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:19:14.235570   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:19:14.235570   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:16.392176   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:16.393208   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:16.393208   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:18.931070   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:18.931973   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:18.932493   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:19:19.038434   11908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8028089s)
	I0719 04:19:19.053175   11908 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:19:19.060791   11908 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:19:19.060791   11908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:19:19.060791   11908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:19:19.061625   11908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:19:19.064733   11908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
	I0719 04:19:19.078488   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
	I0719 04:19:19.096657   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:19:19.141481   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
	I0719 04:19:19.186154   11908 start.go:296] duration metric: took 4.9643708s for postStartSetup
	I0719 04:19:19.186154   11908 fix.go:56] duration metric: took 51.9222508s for fixHost
	I0719 04:19:19.186154   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:23.870420   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:23.870420   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:23.875775   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:23.876403   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:23.876403   11908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:19:24.012488   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362764.016863919
	
	I0719 04:19:24.012488   11908 fix.go:216] guest clock: 1721362764.016863919
	I0719 04:19:24.012488   11908 fix.go:229] Guest: 2024-07-19 04:19:24.016863919 +0000 UTC Remote: 2024-07-19 04:19:19.1861548 +0000 UTC m=+57.580185601 (delta=4.830709119s)
	I0719 04:19:24.012488   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:26.182442   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:26.182676   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:26.182676   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:28.790985   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:28.790985   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:28.797512   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:28.798275   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:28.798275   11908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721362764
	I0719 04:19:28.954624   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:19:24 UTC 2024
	
	I0719 04:19:28.954624   11908 fix.go:236] clock set: Fri Jul 19 04:19:24 UTC 2024
	 (err=<nil>)
	I0719 04:19:28.954624   11908 start.go:83] releasing machines lock for "functional-149600", held for 1m1.6906073s
	I0719 04:19:28.954952   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:31.171228   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:31.171393   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:31.171393   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:34.042286   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:34.042286   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:34.047433   11908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:19:34.047433   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:34.059846   11908 ssh_runner.go:195] Run: cat /version.json
	I0719 04:19:34.059846   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:36.523606   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:36.523628   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:36.523684   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:39.126747   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:39.126747   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:39.127737   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:19:39.227169   11908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1795571s)
	W0719 04:19:39.227169   11908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:19:39.258833   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:39.258833   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:39.259777   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	W0719 04:19:39.339726   11908 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:19:39.339839   11908 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:19:39.353457   11908 ssh_runner.go:235] Completed: cat /version.json: (5.2935496s)
	I0719 04:19:39.364534   11908 ssh_runner.go:195] Run: systemctl --version
	I0719 04:19:39.384482   11908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:19:39.392154   11908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:19:39.403112   11908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:19:39.419839   11908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 04:19:39.419839   11908 start.go:495] detecting cgroup driver to use...
	I0719 04:19:39.420108   11908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:19:39.467456   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 04:19:39.499239   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 04:19:39.522220   11908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:19:39.533043   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:19:39.562936   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:19:39.594192   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:19:39.624160   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:19:39.654835   11908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:19:39.684342   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:19:39.714405   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:19:39.744483   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 04:19:39.773149   11908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:19:39.804037   11908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:19:39.833349   11908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:19:40.058814   11908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 04:19:40.099575   11908 start.go:495] detecting cgroup driver to use...
	I0719 04:19:40.111657   11908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:19:40.147724   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:19:40.182021   11908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:19:40.219208   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:19:40.252518   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:19:40.274665   11908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:19:40.317754   11908 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:19:40.334468   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:19:40.352225   11908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:19:40.391447   11908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:19:40.611469   11908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:19:40.826485   11908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:19:40.826637   11908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:19:40.870608   11908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:19:41.079339   11908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:21:08.989750   11908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m27.9093148s)
	I0719 04:21:09.002113   11908 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 04:21:09.085419   11908 out.go:177] 
	W0719 04:21:09.088414   11908 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jul 19 03:53:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:53:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:53:02 functional-149600 dockerd[4517]: time="2024-07-19T03:53:02.427497928Z" level=info msg="Starting up"
	Jul 19 03:54:02 functional-149600 dockerd[4517]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:54:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 19 03:54:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:54:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:54:02 functional-149600 dockerd[4794]: time="2024-07-19T03:54:02.625522087Z" level=info msg="Starting up"
	Jul 19 03:55:02 functional-149600 dockerd[4794]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:55:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jul 19 03:55:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:55:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:55:02 functional-149600 dockerd[5024]: time="2024-07-19T03:55:02.867963022Z" level=info msg="Starting up"
	Jul 19 03:56:02 functional-149600 dockerd[5024]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:56:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:56:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:56:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:56:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jul 19 03:56:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:56:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:56:03 functional-149600 dockerd[5250]: time="2024-07-19T03:56:03.114424888Z" level=info msg="Starting up"
	Jul 19 03:57:03 functional-149600 dockerd[5250]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:57:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jul 19 03:57:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:57:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:57:03 functional-149600 dockerd[5584]: time="2024-07-19T03:57:03.385021046Z" level=info msg="Starting up"
	Jul 19 03:58:03 functional-149600 dockerd[5584]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:58:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jul 19 03:58:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:58:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:58:03 functional-149600 dockerd[5803]: time="2024-07-19T03:58:03.585932078Z" level=info msg="Starting up"
	Jul 19 03:59:03 functional-149600 dockerd[5803]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:59:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jul 19 03:59:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:59:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:59:03 functional-149600 dockerd[6023]: time="2024-07-19T03:59:03.812709134Z" level=info msg="Starting up"
	Jul 19 04:00:03 functional-149600 dockerd[6023]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:00:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:00:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:00:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:00:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jul 19 04:00:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:00:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:00:04 functional-149600 dockerd[6281]: time="2024-07-19T04:00:04.125100395Z" level=info msg="Starting up"
	Jul 19 04:01:04 functional-149600 dockerd[6281]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:01:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jul 19 04:01:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:01:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:01:04 functional-149600 dockerd[6502]: time="2024-07-19T04:01:04.384065143Z" level=info msg="Starting up"
	Jul 19 04:02:04 functional-149600 dockerd[6502]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:02:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jul 19 04:02:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:02:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:02:04 functional-149600 dockerd[6727]: time="2024-07-19T04:02:04.629921832Z" level=info msg="Starting up"
	Jul 19 04:03:04 functional-149600 dockerd[6727]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:03:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jul 19 04:03:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:03:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:03:04 functional-149600 dockerd[6945]: time="2024-07-19T04:03:04.881594773Z" level=info msg="Starting up"
	Jul 19 04:04:04 functional-149600 dockerd[6945]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:04:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:04:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:04:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:04:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jul 19 04:04:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:04:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:04:05 functional-149600 dockerd[7168]: time="2024-07-19T04:04:05.123312469Z" level=info msg="Starting up"
	Jul 19 04:05:05 functional-149600 dockerd[7168]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:05:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jul 19 04:05:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:05:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:05:05 functional-149600 dockerd[7390]: time="2024-07-19T04:05:05.382469694Z" level=info msg="Starting up"
	Jul 19 04:06:05 functional-149600 dockerd[7390]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:06:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jul 19 04:06:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:06:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:06:05 functional-149600 dockerd[7633]: time="2024-07-19T04:06:05.593228245Z" level=info msg="Starting up"
	Jul 19 04:07:05 functional-149600 dockerd[7633]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:07:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jul 19 04:07:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:07:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:07:05 functional-149600 dockerd[7873]: time="2024-07-19T04:07:05.880412514Z" level=info msg="Starting up"
	Jul 19 04:08:05 functional-149600 dockerd[7873]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:08:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:08:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:08:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:08:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jul 19 04:08:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:08:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:08:06 functional-149600 dockerd[8117]: time="2024-07-19T04:08:06.127986862Z" level=info msg="Starting up"
	Jul 19 04:09:06 functional-149600 dockerd[8117]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:09:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jul 19 04:09:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:09:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:09:06 functional-149600 dockerd[8352]: time="2024-07-19T04:09:06.371958374Z" level=info msg="Starting up"
	Jul 19 04:10:06 functional-149600 dockerd[8352]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:10:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jul 19 04:10:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:10:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:10:06 functional-149600 dockerd[8667]: time="2024-07-19T04:10:06.620432494Z" level=info msg="Starting up"
	Jul 19 04:11:06 functional-149600 dockerd[8667]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:11:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jul 19 04:11:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:11:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:11:06 functional-149600 dockerd[8889]: time="2024-07-19T04:11:06.842404443Z" level=info msg="Starting up"
	Jul 19 04:12:06 functional-149600 dockerd[8889]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:12:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:12:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:12:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:12:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jul 19 04:12:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:12:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:12:07 functional-149600 dockerd[9109]: time="2024-07-19T04:12:07.102473619Z" level=info msg="Starting up"
	Jul 19 04:13:07 functional-149600 dockerd[9109]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:13:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jul 19 04:13:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:13:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:13:07 functional-149600 dockerd[9440]: time="2024-07-19T04:13:07.376165478Z" level=info msg="Starting up"
	Jul 19 04:14:07 functional-149600 dockerd[9440]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:14:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jul 19 04:14:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:14:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:14:07 functional-149600 dockerd[9662]: time="2024-07-19T04:14:07.590302364Z" level=info msg="Starting up"
	Jul 19 04:15:07 functional-149600 dockerd[9662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:15:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jul 19 04:15:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:15:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:15:07 functional-149600 dockerd[9879]: time="2024-07-19T04:15:07.829795571Z" level=info msg="Starting up"
	Jul 19 04:16:07 functional-149600 dockerd[9879]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:16:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:16:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:16:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:16:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jul 19 04:16:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:16:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:16:08 functional-149600 dockerd[10215]: time="2024-07-19T04:16:08.121334668Z" level=info msg="Starting up"
	Jul 19 04:17:08 functional-149600 dockerd[10215]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:17:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jul 19 04:17:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:17:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:17:08 functional-149600 dockerd[10435]: time="2024-07-19T04:17:08.312026488Z" level=info msg="Starting up"
	Jul 19 04:18:08 functional-149600 dockerd[10435]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:18:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Jul 19 04:18:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:18:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:18:08 functional-149600 dockerd[10658]: time="2024-07-19T04:18:08.567478720Z" level=info msg="Starting up"
	Jul 19 04:19:08 functional-149600 dockerd[10658]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:19:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 27.
	Jul 19 04:19:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:19:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:19:08 functional-149600 dockerd[11028]: time="2024-07-19T04:19:08.881713903Z" level=info msg="Starting up"
	Jul 19 04:19:41 functional-149600 dockerd[11028]: time="2024-07-19T04:19:41.104825080Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:20:08 functional-149600 dockerd[11028]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:20:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:20:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:20:08 functional-149600 dockerd[11475]: time="2024-07-19T04:20:08.959849556Z" level=info msg="Starting up"
	Jul 19 04:21:08 functional-149600 dockerd[11475]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:21:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 04:21:09.089413   11908 out.go:239] * 
	W0719 04:21:09.091413   11908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 04:21:09.095524   11908 out.go:177] 
	
	
	==> Docker <==
	Jul 19 04:21:09 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:21:09 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:21:09 functional-149600 dockerd[11674]: time="2024-07-19T04:21:09.212884325Z" level=info msg="Starting up"
	Jul 19 04:22:09 functional-149600 dockerd[11674]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="error getting RW layer size for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d'"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="error getting RW layer size for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2'"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="error getting RW layer size for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0'"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="error getting RW layer size for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703'"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="error getting RW layer size for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f'"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="error getting RW layer size for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f'"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="error getting RW layer size for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:22:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:22:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26'"
	Jul 19 04:22:09 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:22:09 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:22:09 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:22:09 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 19 04:22:09 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:22:09 functional-149600 systemd[1]: Starting Docker Application Container Engine...
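The journal output above repeats a single failure cycle: dockerd starts, times out after 60 seconds dialing `/run/containerd/containerd.sock`, exits with status 1, and systemd schedules another restart, incrementing the counter from 1 up to 27. A minimal sketch (illustrative only, not part of the test suite) for tallying those cycles from captured journald output — the sample lines are taken verbatim from the report:

```python
import re

# Each failed restart cycle in the journal ends with a systemd line like:
#   "docker.service: Scheduled restart job, restart counter is at N."
# Extracting the counters makes the number of cycles visible at a glance.
RESTART_RE = re.compile(r"restart counter is at (\d+)\.")

def restart_counters(journal_lines):
    """Return the restart counter values found in journald output, in order."""
    return [int(m.group(1))
            for line in journal_lines
            if (m := RESTART_RE.search(line))]

sample = [
    "Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.",
    'Jul 19 03:53:02 functional-149600 dockerd[4517]: time="2024-07-19T03:53:02.427497928Z" level=info msg="Starting up"',
    "Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.",
]
print(restart_counters(sample))  # -> [1, 2]
```

Run against the full capture above, the counters come out as an unbroken 1..27 sequence at roughly one-minute intervals, confirming that every restart failed the same way rather than intermittently.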
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T04:22:11Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.217497] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.196783] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.258312] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +8.589795] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.109572] kauditd_printk_skb: 202 callbacks suppressed
	[  +5.479934] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.746047] systemd-fstab-generator[1680]: Ignoring "noauto" option for root device
	[  +6.463791] systemd-fstab-generator[1887]: Ignoring "noauto" option for root device
	[  +0.101637] kauditd_printk_skb: 48 callbacks suppressed
	[Jul19 03:50] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.137056] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.913934] systemd-fstab-generator[2516]: Ignoring "noauto" option for root device
	[  +0.188713] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.060318] hrtimer: interrupt took 3867561 ns
	[  +7.580998] kauditd_printk_skb: 90 callbacks suppressed
	[Jul19 03:51] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.149840] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.466272] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.296379] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	[  +0.316733] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +5.318922] kauditd_printk_skb: 89 callbacks suppressed
	[Jul19 04:19] systemd-fstab-generator[11331]: Ignoring "noauto" option for root device
	[  +0.554891] systemd-fstab-generator[11364]: Ignoring "noauto" option for root device
	[  +0.216365] systemd-fstab-generator[11376]: Ignoring "noauto" option for root device
	[  +0.239966] systemd-fstab-generator[11390]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 04:23:09 up 35 min,  0 users,  load average: 0.01, 0.00, 0.00
	Linux functional-149600 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 04:23:05 functional-149600 kubelet[2296]: E0719 04:23:05.460896    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?resourceVersion=0&timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:23:05 functional-149600 kubelet[2296]: E0719 04:23:05.462118    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:23:05 functional-149600 kubelet[2296]: E0719 04:23:05.463072    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:23:05 functional-149600 kubelet[2296]: E0719 04:23:05.464164    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:23:05 functional-149600 kubelet[2296]: E0719 04:23:05.465287    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:23:05 functional-149600 kubelet[2296]: E0719 04:23:05.465379    2296 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 19 04:23:06 functional-149600 kubelet[2296]: E0719 04:23:06.182141    2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/kube-apiserver-functional-149600.17e380d1c652e7be\": dial tcp 172.28.160.82:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-149600.17e380d1c652e7be  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-149600,UID:73b077f74b512a0b97280a590f1f1546,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://172.28.160.82:8441/readyz\": dial tcp 172.28.160.82:8441: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-149600,},FirstTimestamp:2024-07-19 03:52:01.049503678 +0000 UTC m=+116.102752443,LastTimestamp:2024-07-19 03:52:05.118189382 +0000 UTC m=+120.171438047,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-149600,}"
	Jul 19 04:23:08 functional-149600 kubelet[2296]: E0719 04:23:08.870911    2296 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 31m18.498676576s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.326752    2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused" interval="7s"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.605752    2296 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.605919    2296 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.606214    2296 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.608490    2296 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.608525    2296 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.608551    2296 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.608574    2296 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: I0719 04:23:09.608588    2296 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.608693    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.608720    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.609802    2296 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.610198    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.610231    2296 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.611611    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.613548    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 04:23:09 functional-149600 kubelet[2296]: E0719 04:23:09.614021    2296 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

-- /stdout --
** stderr ** 
	W0719 04:21:21.828771    6536 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 04:22:09.245373    6536 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:22:09.276565    6536 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:22:09.309274    6536 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:22:09.340521    6536 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:22:09.372190    6536 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:22:09.404871    6536 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:22:09.449601    6536 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:22:09.481765    6536 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600: exit status 2 (12.1226476s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 04:23:10.307510    5948 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-149600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (300.71s)

TestFunctional/serial/ComponentHealth (120.58s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-149600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-149600 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (10.5118763s)

** stderr ** 
	E0719 04:23:24.594015    5064 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 04:23:26.707195    5064 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 04:23:28.743526    5064 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 04:23:30.771683    5064 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	E0719 04:23:32.820987    5064 memcache.go:265] couldn't get current server API group list: Get "https://172.28.160.82:8441/api?timeout=32s": dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.
	Unable to connect to the server: dial tcp 172.28.160.82:8441: connectex: No connection could be made because the target machine actively refused it.

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-149600 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-149600 -n functional-149600: exit status 2 (12.0004663s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	W0719 04:23:32.925695    3116 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 logs -n 25
E0719 04:25:10.093548    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 logs -n 25: (1m25.5883133s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| unpause | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | unpause                                                                  |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                  | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600              |                   |                   |         |                     |                     |
	|         | stop                                                                     |                   |                   |         |                     |                     |
	| delete  | -p nospam-907600                                                         | nospam-907600     | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	| start   | -p functional-149600                                                     | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:50 UTC |
	|         | --memory=4000                                                            |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                               |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                                     | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:50 UTC |                     |
	|         | --alsologtostderr -v=8                                                   |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:59 UTC | 19 Jul 24 04:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:01 UTC | 19 Jul 24 04:03 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:03 UTC | 19 Jul 24 04:05 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                              | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:05 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                              |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache delete                                           | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                              |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |                   |         |                     |                     |
	| cache   | list                                                                     | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	| ssh     | functional-149600 ssh sudo                                               | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | crictl images                                                            |                   |                   |         |                     |                     |
	| ssh     | functional-149600                                                        | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | ssh sudo docker rmi                                                      |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| ssh     | functional-149600 ssh                                                    | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | functional-149600 cache reload                                           | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC | 19 Jul 24 04:09 UTC |
	| ssh     | functional-149600 ssh                                                    | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |                   |         |                     |                     |
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-149600 kubectl --                                             | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:12 UTC |                     |
	|         | --context functional-149600                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                                     | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:18 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:18:21
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:18:21.766878   11908 out.go:291] Setting OutFile to fd 508 ...
	I0719 04:18:21.767672   11908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:18:21.767672   11908 out.go:304] Setting ErrFile to fd 628...
	I0719 04:18:21.767704   11908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:18:21.802747   11908 out.go:298] Setting JSON to false
	I0719 04:18:21.806931   11908 start.go:129] hostinfo: {"hostname":"minikube6","uptime":21727,"bootTime":1721340973,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 04:18:21.807048   11908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 04:18:21.811368   11908 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 04:18:21.814314   11908 notify.go:220] Checking for updates...
	I0719 04:18:21.815240   11908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:18:21.817700   11908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:18:21.821305   11908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 04:18:21.823619   11908 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:18:21.827233   11908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:18:21.830726   11908 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:18:21.830936   11908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:18:27.199268   11908 out.go:177] * Using the hyperv driver based on existing profile
	I0719 04:18:27.203184   11908 start.go:297] selected driver: hyperv
	I0719 04:18:27.203184   11908 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:18:27.203184   11908 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:18:27.252525   11908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:18:27.252525   11908 cni.go:84] Creating CNI manager for ""
	I0719 04:18:27.252525   11908 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 04:18:27.252525   11908 start.go:340] cluster config:
	{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:18:27.253102   11908 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:18:27.258660   11908 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	I0719 04:18:27.260254   11908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:18:27.261246   11908 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 04:18:27.261246   11908 cache.go:56] Caching tarball of preloaded images
	I0719 04:18:27.261246   11908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:18:27.261246   11908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:18:27.261246   11908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
	I0719 04:18:27.263301   11908 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:18:27.263301   11908 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
	I0719 04:18:27.263301   11908 start.go:96] Skipping create...Using existing machine configuration
	I0719 04:18:27.263301   11908 fix.go:54] fixHost starting: 
	I0719 04:18:27.264677   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:30.104551   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:30.104551   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:30.104990   11908 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
	W0719 04:18:30.104990   11908 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 04:18:30.108813   11908 out.go:177] * Updating the running hyperv "functional-149600" VM ...
	I0719 04:18:30.112780   11908 machine.go:94] provisionDockerMachine start ...
	I0719 04:18:30.112780   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:32.281432   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:32.282235   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:32.282235   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:34.875102   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:34.875610   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:34.880845   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:34.881446   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:34.881446   11908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:18:35.021234   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 04:18:35.021324   11908 buildroot.go:166] provisioning hostname "functional-149600"
	I0719 04:18:35.021324   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:37.170899   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:37.170899   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:37.171700   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:39.728642   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:39.728642   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:39.734540   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:39.735114   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:39.735114   11908 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
	I0719 04:18:39.893752   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 04:18:39.893752   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:42.020647   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:42.021626   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:42.021626   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:44.602600   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:44.602600   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:44.610065   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:44.610065   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:44.610065   11908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:18:44.753558   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:18:44.753558   11908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:18:44.753558   11908 buildroot.go:174] setting up certificates
	I0719 04:18:44.753558   11908 provision.go:84] configureAuth start
	I0719 04:18:44.753558   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:46.923705   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:46.923915   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:46.923915   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:49.456995   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:49.457146   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:49.457146   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:51.630822   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:51.630822   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:51.631464   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:54.211617   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:54.211617   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:54.211905   11908 provision.go:143] copyHostCerts
	I0719 04:18:54.222331   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:18:54.222331   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:18:54.223238   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:18:54.233187   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:18:54.233187   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:18:54.233187   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:18:54.242612   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:18:54.242612   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:18:54.242944   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:18:54.244582   11908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
	I0719 04:18:54.390527   11908 provision.go:177] copyRemoteCerts
	I0719 04:18:54.401534   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:18:54.401534   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:59.164667   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:59.164667   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:59.166132   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:18:59.268712   11908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8670097s)
	I0719 04:18:59.269401   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:18:59.315850   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 04:18:59.360883   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:18:59.404491   11908 provision.go:87] duration metric: took 14.650763s to configureAuth
	I0719 04:18:59.404491   11908 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:18:59.405435   11908 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:18:59.405475   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:01.593936   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:01.593936   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:01.594118   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:04.230442   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:04.230442   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:04.239294   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:04.239760   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:04.239760   11908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:19:04.380052   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:19:04.380052   11908 buildroot.go:70] root file system type: tmpfs
	I0719 04:19:04.380261   11908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:19:04.380347   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:06.539303   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:06.539515   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:06.539515   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:09.127043   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:09.127043   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:09.133308   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:09.133448   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:09.133448   11908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:19:09.306386   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 04:19:09.306918   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:11.508256   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:11.508256   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:11.509277   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:14.068121   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:14.068121   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:14.074438   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:14.075220   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:14.075220   11908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:19:14.221726   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:19:14.221726   11908 machine.go:97] duration metric: took 44.1084347s to provisionDockerMachine
	I0719 04:19:14.221726   11908 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
	I0719 04:19:14.221726   11908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:19:14.235570   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:19:14.235570   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:16.392176   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:16.393208   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:16.393208   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:18.931070   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:18.931973   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:18.932493   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:19:19.038434   11908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8028089s)
	I0719 04:19:19.053175   11908 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:19:19.060791   11908 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:19:19.060791   11908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:19:19.060791   11908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:19:19.061625   11908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:19:19.064733   11908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
	I0719 04:19:19.078488   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
	I0719 04:19:19.096657   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:19:19.141481   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
	I0719 04:19:19.186154   11908 start.go:296] duration metric: took 4.9643708s for postStartSetup
	I0719 04:19:19.186154   11908 fix.go:56] duration metric: took 51.9222508s for fixHost
	I0719 04:19:19.186154   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:23.870420   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:23.870420   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:23.875775   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:23.876403   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:23.876403   11908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:19:24.012488   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362764.016863919
	
	I0719 04:19:24.012488   11908 fix.go:216] guest clock: 1721362764.016863919
	I0719 04:19:24.012488   11908 fix.go:229] Guest: 2024-07-19 04:19:24.016863919 +0000 UTC Remote: 2024-07-19 04:19:19.1861548 +0000 UTC m=+57.580185601 (delta=4.830709119s)
	I0719 04:19:24.012488   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:26.182442   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:26.182676   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:26.182676   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:28.790985   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:28.790985   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:28.797512   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:28.798275   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:28.798275   11908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721362764
	I0719 04:19:28.954624   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:19:24 UTC 2024
	
	I0719 04:19:28.954624   11908 fix.go:236] clock set: Fri Jul 19 04:19:24 UTC 2024
	 (err=<nil>)
	I0719 04:19:28.954624   11908 start.go:83] releasing machines lock for "functional-149600", held for 1m1.6906073s
	I0719 04:19:28.954952   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:31.171228   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:31.171393   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:31.171393   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:34.042286   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:34.042286   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:34.047433   11908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:19:34.047433   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:34.059846   11908 ssh_runner.go:195] Run: cat /version.json
	I0719 04:19:34.059846   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:36.523606   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:36.523628   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:36.523684   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:39.126747   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:39.126747   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:39.127737   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:19:39.227169   11908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1795571s)
	W0719 04:19:39.227169   11908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:19:39.258833   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:39.258833   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:39.259777   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	W0719 04:19:39.339726   11908 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:19:39.339839   11908 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:19:39.353457   11908 ssh_runner.go:235] Completed: cat /version.json: (5.2935496s)
	I0719 04:19:39.364534   11908 ssh_runner.go:195] Run: systemctl --version
	I0719 04:19:39.384482   11908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:19:39.392154   11908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:19:39.403112   11908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:19:39.419839   11908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 04:19:39.419839   11908 start.go:495] detecting cgroup driver to use...
	I0719 04:19:39.420108   11908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:19:39.467456   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 04:19:39.499239   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 04:19:39.522220   11908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:19:39.533043   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:19:39.562936   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:19:39.594192   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:19:39.624160   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:19:39.654835   11908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:19:39.684342   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:19:39.714405   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:19:39.744483   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 04:19:39.773149   11908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:19:39.804037   11908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:19:39.833349   11908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:19:40.058814   11908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 04:19:40.099575   11908 start.go:495] detecting cgroup driver to use...
	I0719 04:19:40.111657   11908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:19:40.147724   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:19:40.182021   11908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:19:40.219208   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:19:40.252518   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:19:40.274665   11908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:19:40.317754   11908 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:19:40.334468   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:19:40.352225   11908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:19:40.391447   11908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:19:40.611469   11908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:19:40.826485   11908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:19:40.826637   11908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:19:40.870608   11908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:19:41.079339   11908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:21:08.989750   11908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m27.9093148s)
	I0719 04:21:09.002113   11908 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 04:21:09.085419   11908 out.go:177] 
	W0719 04:21:09.088414   11908 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jul 19 03:53:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:53:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:53:02 functional-149600 dockerd[4517]: time="2024-07-19T03:53:02.427497928Z" level=info msg="Starting up"
	Jul 19 03:54:02 functional-149600 dockerd[4517]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:54:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 19 03:54:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:54:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:54:02 functional-149600 dockerd[4794]: time="2024-07-19T03:54:02.625522087Z" level=info msg="Starting up"
	Jul 19 03:55:02 functional-149600 dockerd[4794]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:55:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
	Jul 19 03:55:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:55:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:55:02 functional-149600 dockerd[5024]: time="2024-07-19T03:55:02.867963022Z" level=info msg="Starting up"
	Jul 19 03:56:02 functional-149600 dockerd[5024]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:56:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:56:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:56:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:56:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jul 19 03:56:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:56:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:56:03 functional-149600 dockerd[5250]: time="2024-07-19T03:56:03.114424888Z" level=info msg="Starting up"
	Jul 19 03:57:03 functional-149600 dockerd[5250]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:57:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
	Jul 19 03:57:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:57:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:57:03 functional-149600 dockerd[5584]: time="2024-07-19T03:57:03.385021046Z" level=info msg="Starting up"
	Jul 19 03:58:03 functional-149600 dockerd[5584]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:58:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
	Jul 19 03:58:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:58:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:58:03 functional-149600 dockerd[5803]: time="2024-07-19T03:58:03.585932078Z" level=info msg="Starting up"
	Jul 19 03:59:03 functional-149600 dockerd[5803]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:59:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
	Jul 19 03:59:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:59:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:59:03 functional-149600 dockerd[6023]: time="2024-07-19T03:59:03.812709134Z" level=info msg="Starting up"
	Jul 19 04:00:03 functional-149600 dockerd[6023]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:00:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:00:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:00:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:00:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
	Jul 19 04:00:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:00:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:00:04 functional-149600 dockerd[6281]: time="2024-07-19T04:00:04.125100395Z" level=info msg="Starting up"
	Jul 19 04:01:04 functional-149600 dockerd[6281]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:01:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
	Jul 19 04:01:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:01:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:01:04 functional-149600 dockerd[6502]: time="2024-07-19T04:01:04.384065143Z" level=info msg="Starting up"
	Jul 19 04:02:04 functional-149600 dockerd[6502]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:02:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
	Jul 19 04:02:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:02:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:02:04 functional-149600 dockerd[6727]: time="2024-07-19T04:02:04.629921832Z" level=info msg="Starting up"
	Jul 19 04:03:04 functional-149600 dockerd[6727]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:03:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
	Jul 19 04:03:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:03:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:03:04 functional-149600 dockerd[6945]: time="2024-07-19T04:03:04.881594773Z" level=info msg="Starting up"
	Jul 19 04:04:04 functional-149600 dockerd[6945]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:04:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:04:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:04:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:04:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
	Jul 19 04:04:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:04:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:04:05 functional-149600 dockerd[7168]: time="2024-07-19T04:04:05.123312469Z" level=info msg="Starting up"
	Jul 19 04:05:05 functional-149600 dockerd[7168]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:05:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
	Jul 19 04:05:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:05:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:05:05 functional-149600 dockerd[7390]: time="2024-07-19T04:05:05.382469694Z" level=info msg="Starting up"
	Jul 19 04:06:05 functional-149600 dockerd[7390]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:06:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
	Jul 19 04:06:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:06:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:06:05 functional-149600 dockerd[7633]: time="2024-07-19T04:06:05.593228245Z" level=info msg="Starting up"
	Jul 19 04:07:05 functional-149600 dockerd[7633]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:07:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
	Jul 19 04:07:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:07:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:07:05 functional-149600 dockerd[7873]: time="2024-07-19T04:07:05.880412514Z" level=info msg="Starting up"
	Jul 19 04:08:05 functional-149600 dockerd[7873]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:08:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:08:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:08:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:08:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
	Jul 19 04:08:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:08:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:08:06 functional-149600 dockerd[8117]: time="2024-07-19T04:08:06.127986862Z" level=info msg="Starting up"
	Jul 19 04:09:06 functional-149600 dockerd[8117]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:09:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
	Jul 19 04:09:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:09:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:09:06 functional-149600 dockerd[8352]: time="2024-07-19T04:09:06.371958374Z" level=info msg="Starting up"
	Jul 19 04:10:06 functional-149600 dockerd[8352]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:10:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
	Jul 19 04:10:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:10:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:10:06 functional-149600 dockerd[8667]: time="2024-07-19T04:10:06.620432494Z" level=info msg="Starting up"
	Jul 19 04:11:06 functional-149600 dockerd[8667]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:11:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
	Jul 19 04:11:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:11:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:11:06 functional-149600 dockerd[8889]: time="2024-07-19T04:11:06.842404443Z" level=info msg="Starting up"
	Jul 19 04:12:06 functional-149600 dockerd[8889]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:12:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:12:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:12:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:12:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
	Jul 19 04:12:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:12:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:12:07 functional-149600 dockerd[9109]: time="2024-07-19T04:12:07.102473619Z" level=info msg="Starting up"
	Jul 19 04:13:07 functional-149600 dockerd[9109]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:13:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
	Jul 19 04:13:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:13:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:13:07 functional-149600 dockerd[9440]: time="2024-07-19T04:13:07.376165478Z" level=info msg="Starting up"
	Jul 19 04:14:07 functional-149600 dockerd[9440]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:14:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
	Jul 19 04:14:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:14:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:14:07 functional-149600 dockerd[9662]: time="2024-07-19T04:14:07.590302364Z" level=info msg="Starting up"
	Jul 19 04:15:07 functional-149600 dockerd[9662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:15:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
	Jul 19 04:15:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:15:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:15:07 functional-149600 dockerd[9879]: time="2024-07-19T04:15:07.829795571Z" level=info msg="Starting up"
	Jul 19 04:16:07 functional-149600 dockerd[9879]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:16:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:16:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:16:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:16:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
	Jul 19 04:16:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:16:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:16:08 functional-149600 dockerd[10215]: time="2024-07-19T04:16:08.121334668Z" level=info msg="Starting up"
	Jul 19 04:17:08 functional-149600 dockerd[10215]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:17:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
	Jul 19 04:17:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:17:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:17:08 functional-149600 dockerd[10435]: time="2024-07-19T04:17:08.312026488Z" level=info msg="Starting up"
	Jul 19 04:18:08 functional-149600 dockerd[10435]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:18:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Jul 19 04:18:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:18:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:18:08 functional-149600 dockerd[10658]: time="2024-07-19T04:18:08.567478720Z" level=info msg="Starting up"
	Jul 19 04:19:08 functional-149600 dockerd[10658]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:19:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 27.
	Jul 19 04:19:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:19:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:19:08 functional-149600 dockerd[11028]: time="2024-07-19T04:19:08.881713903Z" level=info msg="Starting up"
	Jul 19 04:19:41 functional-149600 dockerd[11028]: time="2024-07-19T04:19:41.104825080Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:20:08 functional-149600 dockerd[11028]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:20:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:20:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:20:08 functional-149600 dockerd[11475]: time="2024-07-19T04:20:08.959849556Z" level=info msg="Starting up"
	Jul 19 04:21:08 functional-149600 dockerd[11475]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:21:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 04:21:09.089413   11908 out.go:239] * 
	W0719 04:21:09.091413   11908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 04:21:09.095524   11908 out.go:177] 
	
	
	==> Docker <==
	Jul 19 04:23:09 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:23:09 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:23:09 functional-149600 dockerd[12219]: time="2024-07-19T04:23:09.806091453Z" level=info msg="Starting up"
	Jul 19 04:24:09 functional-149600 dockerd[12219]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:24:09 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:24:09 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:24:09 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="error getting RW layer size for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2'"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="error getting RW layer size for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26'"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="error getting RW layer size for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0'"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="error getting RW layer size for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703'"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="error getting RW layer size for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f'"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="error getting RW layer size for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f'"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="error getting RW layer size for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="Set backoffDuration to : 1m0s for container ID '86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d'"
	Jul 19 04:24:09 functional-149600 cri-dockerd[1341]: time="2024-07-19T04:24:09Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 19 04:24:10 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
	Jul 19 04:24:10 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:24:10 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-19T04:24:12Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.217497] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.196783] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.258312] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +8.589795] systemd-fstab-generator[1435]: Ignoring "noauto" option for root device
	[  +0.109572] kauditd_printk_skb: 202 callbacks suppressed
	[  +5.479934] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.746047] systemd-fstab-generator[1680]: Ignoring "noauto" option for root device
	[  +6.463791] systemd-fstab-generator[1887]: Ignoring "noauto" option for root device
	[  +0.101637] kauditd_printk_skb: 48 callbacks suppressed
	[Jul19 03:50] systemd-fstab-generator[2289]: Ignoring "noauto" option for root device
	[  +0.137056] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.913934] systemd-fstab-generator[2516]: Ignoring "noauto" option for root device
	[  +0.188713] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.060318] hrtimer: interrupt took 3867561 ns
	[  +7.580998] kauditd_printk_skb: 90 callbacks suppressed
	[Jul19 03:51] systemd-fstab-generator[3837]: Ignoring "noauto" option for root device
	[  +0.149840] kauditd_printk_skb: 10 callbacks suppressed
	[  +0.466272] systemd-fstab-generator[3873]: Ignoring "noauto" option for root device
	[  +0.296379] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	[  +0.316733] systemd-fstab-generator[3913]: Ignoring "noauto" option for root device
	[  +5.318922] kauditd_printk_skb: 89 callbacks suppressed
	[Jul19 04:19] systemd-fstab-generator[11331]: Ignoring "noauto" option for root device
	[  +0.554891] systemd-fstab-generator[11364]: Ignoring "noauto" option for root device
	[  +0.216365] systemd-fstab-generator[11376]: Ignoring "noauto" option for root device
	[  +0.239966] systemd-fstab-generator[11390]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 04:25:10 up 37 min,  0 users,  load average: 0.00, 0.00, 0.00
	Linux functional-149600 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 19 04:25:05 functional-149600 kubelet[2296]: E0719 04:25:05.494081    2296 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events/etcd-functional-149600.17e380d0a4b3c5a0\": dial tcp 172.28.160.82:8441: connect: connection refused" event="&Event{ObjectMeta:{etcd-functional-149600.17e380d0a4b3c5a0  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:etcd-functional-149600,UID:4fdafe8959874e1470025c76930cf082,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Unhealthy,Message:Liveness probe failed: Get \"http://127.0.0.1:2381/health?exclude=NOSPACE&serializable=true\": dial tcp 127.0.0.1:2381: connect: connection refused,Source:EventSource{Component:kubelet,Host:functional-149600,},FirstTimestamp:2024-07-19 03:51:56.190459296 +0000 UTC m=+111.243707961,LastTimestamp:2024-07-19 03:52:06.191800747 +0000 UTC m=+121.245049412,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-149600,}"
	Jul 19 04:25:07 functional-149600 kubelet[2296]: E0719 04:25:07.527534    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?resourceVersion=0&timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:25:07 functional-149600 kubelet[2296]: E0719 04:25:07.528404    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:25:07 functional-149600 kubelet[2296]: E0719 04:25:07.529409    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:25:07 functional-149600 kubelet[2296]: E0719 04:25:07.530494    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:25:07 functional-149600 kubelet[2296]: E0719 04:25:07.531403    2296 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-149600\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused"
	Jul 19 04:25:07 functional-149600 kubelet[2296]: E0719 04:25:07.531498    2296 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 19 04:25:08 functional-149600 kubelet[2296]: E0719 04:25:08.365976    2296 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-149600?timeout=10s\": dial tcp 172.28.160.82:8441: connect: connection refused" interval="7s"
	Jul 19 04:25:08 functional-149600 kubelet[2296]: E0719 04:25:08.893271    2296 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 33m18.521037579s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.160080    2296 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.160169    2296 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.160187    2296 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.162386    2296 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.163210    2296 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.164506    2296 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.164693    2296 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.164791    2296 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: I0719 04:25:10.164810    2296 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.164850    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.164951    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.165197    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.165346    2296 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.166238    2296 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.166289    2296 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 19 04:25:10 functional-149600 kubelet[2296]: E0719 04:25:10.166806    2296 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:23:44.937207   12572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 04:24:09.834320   12572 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:24:09.869382   12572 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:24:09.901225   12572 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:24:09.933434   12572 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:24:09.963084   12572 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:24:09.991986   12572 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:24:10.024686   12572 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:24:10.058025   12572 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-149600 -n functional-149600: exit status 2 (12.149524s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:25:10.852938    8708 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "functional-149600" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (120.58s)

                                                
                                    
TestFunctional/serial/LogsCmd (94.7s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-149600 logs: exit status 1 (1m34.0242267s)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | binary-mirror-056600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | binary-mirror-056600                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:58266                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-056600                                                                     | binary-mirror-056600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| addons  | disable dashboard -p                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-811100 --wait=true                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:35 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
	|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | -p addons-811100                                                                            |                      |                   |         |                     |                     |
	| ip      | addons-811100 ip                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| ssh     | addons-811100 ssh cat                                                                       | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | /opt/local-path-provisioner/pvc-114f4030-a1d1-4247-ab71-0d8af834e357_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:36 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:36 UTC |
	|         | -p addons-811100                                                                            |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:36 UTC | 19 Jul 24 03:37 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:36 UTC | 19 Jul 24 03:37 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-811100 ssh curl -s                                                                   | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
	| ip      | addons-811100 ip                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
	|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:38 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
	| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:38 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| stop    | -p addons-811100                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:39 UTC |
	| addons  | enable dashboard -p                                                                         | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| addons  | disable gvisor -p                                                                           | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
	|         | addons-811100                                                                               |                      |                   |         |                     |                     |
	| delete  | -p addons-811100                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:40 UTC |
	| start   | -p nospam-907600 -n=1 --memory=2250 --wait=false                                            | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:40 UTC | 19 Jul 24 03:43 UTC |
	|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                       |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:43 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| start   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| pause   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | pause                                                                                       |                      |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| unpause | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | unpause                                                                                     |                      |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| stop    | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
	|         | stop                                                                                        |                      |                   |         |                     |                     |
	| delete  | -p nospam-907600                                                                            | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
	| start   | -p functional-149600                                                                        | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:50 UTC |
	|         | --memory=4000                                                                               |                      |                   |         |                     |                     |
	|         | --apiserver-port=8441                                                                       |                      |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                                                  |                      |                   |         |                     |                     |
	| start   | -p functional-149600                                                                        | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:50 UTC |                     |
	|         | --alsologtostderr -v=8                                                                      |                      |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                                                 | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:59 UTC | 19 Jul 24 04:01 UTC |
	|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                                                 | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:01 UTC | 19 Jul 24 04:03 UTC |
	|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                                                 | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:03 UTC | 19 Jul 24 04:05 UTC |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | functional-149600 cache add                                                                 | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:05 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                                                 |                      |                   |         |                     |                     |
	| cache   | functional-149600 cache delete                                                              | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | minikube-local-cache-test:functional-149600                                                 |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
	| cache   | list                                                                                        | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
	| ssh     | functional-149600 ssh sudo                                                                  | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | crictl images                                                                               |                      |                   |         |                     |                     |
	| ssh     | functional-149600                                                                           | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
	|         | ssh sudo docker rmi                                                                         |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| ssh     | functional-149600 ssh                                                                       | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC |                     |
	|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | functional-149600 cache reload                                                              | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC | 19 Jul 24 04:09 UTC |
	| ssh     | functional-149600 ssh                                                                       | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC |                     |
	|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
	| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
	| kubectl | functional-149600 kubectl --                                                                | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:12 UTC |                     |
	|         | --context functional-149600                                                                 |                      |                   |         |                     |                     |
	|         | get pods                                                                                    |                      |                   |         |                     |                     |
	| start   | -p functional-149600                                                                        | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:18 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |         |                     |                     |
	|         | --wait=all                                                                                  |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:18:21
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:18:21.766878   11908 out.go:291] Setting OutFile to fd 508 ...
	I0719 04:18:21.767672   11908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:18:21.767672   11908 out.go:304] Setting ErrFile to fd 628...
	I0719 04:18:21.767704   11908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:18:21.802747   11908 out.go:298] Setting JSON to false
	I0719 04:18:21.806931   11908 start.go:129] hostinfo: {"hostname":"minikube6","uptime":21727,"bootTime":1721340973,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 04:18:21.807048   11908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 04:18:21.811368   11908 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 04:18:21.814314   11908 notify.go:220] Checking for updates...
	I0719 04:18:21.815240   11908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:18:21.817700   11908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:18:21.821305   11908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 04:18:21.823619   11908 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:18:21.827233   11908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:18:21.830726   11908 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:18:21.830936   11908 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:18:27.199268   11908 out.go:177] * Using the hyperv driver based on existing profile
	I0719 04:18:27.203184   11908 start.go:297] selected driver: hyperv
	I0719 04:18:27.203184   11908 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:18:27.203184   11908 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:18:27.252525   11908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:18:27.252525   11908 cni.go:84] Creating CNI manager for ""
	I0719 04:18:27.252525   11908 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 04:18:27.252525   11908 start.go:340] cluster config:
	{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:18:27.253102   11908 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:18:27.258660   11908 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
	I0719 04:18:27.260254   11908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:18:27.261246   11908 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 04:18:27.261246   11908 cache.go:56] Caching tarball of preloaded images
	I0719 04:18:27.261246   11908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:18:27.261246   11908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:18:27.261246   11908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
	I0719 04:18:27.263301   11908 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:18:27.263301   11908 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
	I0719 04:18:27.263301   11908 start.go:96] Skipping create...Using existing machine configuration
	I0719 04:18:27.263301   11908 fix.go:54] fixHost starting: 
	I0719 04:18:27.264677   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:30.104551   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:30.104551   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:30.104990   11908 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
	W0719 04:18:30.104990   11908 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 04:18:30.108813   11908 out.go:177] * Updating the running hyperv "functional-149600" VM ...
	I0719 04:18:30.112780   11908 machine.go:94] provisionDockerMachine start ...
	I0719 04:18:30.112780   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:32.281432   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:32.282235   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:32.282235   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:34.875102   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:34.875610   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:34.880845   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:34.881446   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:34.881446   11908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:18:35.021234   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 04:18:35.021324   11908 buildroot.go:166] provisioning hostname "functional-149600"
	I0719 04:18:35.021324   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:37.170899   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:37.170899   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:37.171700   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:39.728642   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:39.728642   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:39.734540   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:39.735114   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:39.735114   11908 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
	I0719 04:18:39.893752   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600
	
	I0719 04:18:39.893752   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:42.020647   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:42.021626   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:42.021626   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:44.602600   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:44.602600   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:44.610065   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:18:44.610065   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:18:44.610065   11908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
				fi
			fi
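	The hostname-fixup script that minikube runs over SSH above can be exercised locally against a scratch file instead of the VM's real /etc/hosts. This is a minimal sketch, assuming GNU grep/sed; the temp file and the RESULT variable are illustrative, not part of minikube.

	```shell
	# Re-run the /etc/hosts logic from the log against a scratch hosts file.
	HOSTS=$(mktemp)
	NAME=functional-149600
	printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
	# If no line already names the host, rewrite or append the 127.0.1.1 entry,
	# mirroring the grep/sed branches the log shows.
	if ! grep -q "\s$NAME" "$HOSTS"; then
	  if grep -q '^127\.0\.1\.1\s' "$HOSTS"; then
	    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" "$HOSTS"
	  else
	    echo "127.0.1.1 $NAME" >> "$HOSTS"
	  fi
	fi
	RESULT=$(grep '^127\.0\.1\.1' "$HOSTS")
	echo "$RESULT"
	rm -f "$HOSTS"
	```

	Because the sed branch fires when a stale 127.0.1.1 entry exists, the script is idempotent: a second run finds the name already present and changes nothing.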
	I0719 04:18:44.753558   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:18:44.753558   11908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:18:44.753558   11908 buildroot.go:174] setting up certificates
	I0719 04:18:44.753558   11908 provision.go:84] configureAuth start
	I0719 04:18:44.753558   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:46.923705   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:46.923915   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:46.923915   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:49.456995   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:49.457146   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:49.457146   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:51.630822   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:51.630822   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:51.631464   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:54.211617   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:54.211617   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:54.211905   11908 provision.go:143] copyHostCerts
	I0719 04:18:54.222331   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:18:54.222331   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:18:54.223238   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:18:54.233187   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:18:54.233187   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:18:54.233187   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:18:54.242612   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:18:54.242612   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:18:54.242944   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:18:54.244582   11908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
	I0719 04:18:54.390527   11908 provision.go:177] copyRemoteCerts
	I0719 04:18:54.401534   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:18:54.401534   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:56.573340   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:18:59.164667   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:18:59.164667   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:18:59.166132   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:18:59.268712   11908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8670097s)
	I0719 04:18:59.269401   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:18:59.315850   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0719 04:18:59.360883   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:18:59.404491   11908 provision.go:87] duration metric: took 14.650763s to configureAuth
	I0719 04:18:59.404491   11908 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:18:59.405435   11908 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:18:59.405475   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:01.593936   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:01.593936   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:01.594118   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:04.230442   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:04.230442   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:04.239294   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:04.239760   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:04.239760   11908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:19:04.380052   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:19:04.380052   11908 buildroot.go:70] root file system type: tmpfs
	I0719 04:19:04.380261   11908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:19:04.380347   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:06.539303   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:06.539515   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:06.539515   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:09.127043   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:09.127043   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:09.133308   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:09.133448   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:09.133448   11908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:19:09.306386   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 04:19:09.306918   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:11.508256   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:11.508256   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:11.509277   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:14.068121   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:14.068121   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:14.074438   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:14.075220   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:14.075220   11908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:19:14.221726   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:19:14.221726   11908 machine.go:97] duration metric: took 44.1084347s to provisionDockerMachine
	I0719 04:19:14.221726   11908 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
	I0719 04:19:14.221726   11908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:19:14.235570   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:19:14.235570   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:16.392176   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:16.393208   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:16.393208   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:18.931070   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:18.931973   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:18.932493   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:19:19.038434   11908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8028089s)
	I0719 04:19:19.053175   11908 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:19:19.060791   11908 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:19:19.060791   11908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:19:19.060791   11908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:19:19.061625   11908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:19:19.064733   11908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
	I0719 04:19:19.078488   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
	I0719 04:19:19.096657   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:19:19.141481   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
	I0719 04:19:19.186154   11908 start.go:296] duration metric: took 4.9643708s for postStartSetup
	I0719 04:19:19.186154   11908 fix.go:56] duration metric: took 51.9222508s for fixHost
	I0719 04:19:19.186154   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:21.337933   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:23.870420   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:23.870420   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:23.875775   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:23.876403   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:23.876403   11908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:19:24.012488   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362764.016863919
	
	I0719 04:19:24.012488   11908 fix.go:216] guest clock: 1721362764.016863919
	I0719 04:19:24.012488   11908 fix.go:229] Guest: 2024-07-19 04:19:24.016863919 +0000 UTC Remote: 2024-07-19 04:19:19.1861548 +0000 UTC m=+57.580185601 (delta=4.830709119s)
	I0719 04:19:24.012488   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:26.182442   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:26.182676   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:26.182676   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:28.790985   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:28.790985   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:28.797512   11908 main.go:141] libmachine: Using SSH client type: native
	I0719 04:19:28.798275   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
	I0719 04:19:28.798275   11908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721362764
	I0719 04:19:28.954624   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:19:24 UTC 2024
	
	I0719 04:19:28.954624   11908 fix.go:236] clock set: Fri Jul 19 04:19:24 UTC 2024
	 (err=<nil>)
	I0719 04:19:28.954624   11908 start.go:83] releasing machines lock for "functional-149600", held for 1m1.6906073s
	I0719 04:19:28.954952   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:31.171228   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:31.171393   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:31.171393   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:34.042286   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:34.042286   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:34.047433   11908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:19:34.047433   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:34.059846   11908 ssh_runner.go:195] Run: cat /version.json
	I0719 04:19:34.059846   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:36.423615   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:36.523606   11908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:19:36.523628   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:36.523684   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
	I0719 04:19:39.126747   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:39.126747   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:39.127737   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	I0719 04:19:39.227169   11908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1795571s)
	W0719 04:19:39.227169   11908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:19:39.258833   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82
	
	I0719 04:19:39.258833   11908 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:19:39.259777   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
	W0719 04:19:39.339726   11908 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:19:39.339839   11908 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:19:39.353457   11908 ssh_runner.go:235] Completed: cat /version.json: (5.2935496s)
	I0719 04:19:39.364534   11908 ssh_runner.go:195] Run: systemctl --version
	I0719 04:19:39.384482   11908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:19:39.392154   11908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:19:39.403112   11908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:19:39.419839   11908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0719 04:19:39.419839   11908 start.go:495] detecting cgroup driver to use...
	I0719 04:19:39.420108   11908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:19:39.467456   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 04:19:39.499239   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 04:19:39.522220   11908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:19:39.533043   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:19:39.562936   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:19:39.594192   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:19:39.624160   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:19:39.654835   11908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:19:39.684342   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:19:39.714405   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:19:39.744483   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 04:19:39.773149   11908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:19:39.804037   11908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:19:39.833349   11908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:19:40.058814   11908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 04:19:40.099575   11908 start.go:495] detecting cgroup driver to use...
	I0719 04:19:40.111657   11908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:19:40.147724   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:19:40.182021   11908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:19:40.219208   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:19:40.252518   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:19:40.274665   11908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:19:40.317754   11908 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:19:40.334468   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:19:40.352225   11908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:19:40.391447   11908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:19:40.611469   11908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:19:40.826485   11908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:19:40.826637   11908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:19:40.870608   11908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:19:41.079339   11908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:21:08.989750   11908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m27.9093148s)
	I0719 04:21:09.002113   11908 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0719 04:21:09.085419   11908 out.go:177] 
	W0719 04:21:09.088414   11908 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
	Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
	Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
	Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
	Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
	Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
	Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
	Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Jul 19 03:53:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:53:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:53:02 functional-149600 dockerd[4517]: time="2024-07-19T03:53:02.427497928Z" level=info msg="Starting up"
	Jul 19 03:54:02 functional-149600 dockerd[4517]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 03:54:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 19 03:54:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 03:54:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 03:54:02 functional-149600 dockerd[4794]: time="2024-07-19T03:54:02.625522087Z" level=info msg="Starting up"
	[... failure/restart cycle repeats identically every ~60s: each new dockerd instance (PIDs 4794 through 10435) fails with 'failed to start daemon: failed to dial "/run/containerd/containerd.sock": context deadline exceeded', and systemd schedules restarts 3 through 25, from 03:55:02 to 04:18:08 ...]
	Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
	Jul 19 04:18:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:18:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:18:08 functional-149600 dockerd[10658]: time="2024-07-19T04:18:08.567478720Z" level=info msg="Starting up"
	Jul 19 04:19:08 functional-149600 dockerd[10658]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:19:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 27.
	Jul 19 04:19:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:19:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:19:08 functional-149600 dockerd[11028]: time="2024-07-19T04:19:08.881713903Z" level=info msg="Starting up"
	Jul 19 04:19:41 functional-149600 dockerd[11028]: time="2024-07-19T04:19:41.104825080Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:20:08 functional-149600 dockerd[11028]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:20:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:20:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:20:08 functional-149600 dockerd[11475]: time="2024-07-19T04:20:08.959849556Z" level=info msg="Starting up"
	Jul 19 04:21:08 functional-149600 dockerd[11475]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:21:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0719 04:21:09.089413   11908 out.go:239] * 
	W0719 04:21:09.091413   11908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 04:21:09.095524   11908 out.go:177] 
	
	

-- /stdout --
** stderr ** 
	W0719 04:25:22.985792    7728 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0719 04:26:10.416636    7728 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:26:10.456488    7728 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:26:10.488188    7728 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:26:10.517144    7728 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0719 04:26:10.552693    7728 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
functional_test.go:1234: out/minikube-windows-amd64.exe -p functional-149600 logs failed: exit status 1
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
| start   | --download-only -p                                                                          | binary-mirror-056600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
|         | binary-mirror-056600                                                                        |                      |                   |         |                     |                     |
|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
|         | http://127.0.0.1:58266                                                                      |                      |                   |         |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
| delete  | -p binary-mirror-056600                                                                     | binary-mirror-056600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
| addons  | disable dashboard -p                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
|         | addons-811100                                                                               |                      |                   |         |                     |                     |
| addons  | enable dashboard -p                                                                         | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
|         | addons-811100                                                                               |                      |                   |         |                     |                     |
| start   | -p addons-811100 --wait=true                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:35 UTC |
|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
|         | --addons=yakd --addons=volcano                                                              |                      |                   |         |                     |                     |
|         | --driver=hyperv --addons=ingress                                                            |                      |                   |         |                     |                     |
|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
| addons  | disable nvidia-device-plugin                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
|         | -p addons-811100                                                                            |                      |                   |         |                     |                     |
| ip      | addons-811100 ip                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| ssh     | addons-811100 ssh cat                                                                       | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
|         | /opt/local-path-provisioner/pvc-114f4030-a1d1-4247-ab71-0d8af834e357_default_test-pvc/file1 |                      |                   |         |                     |                     |
| addons  | disable cloud-spanner -p                                                                    | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:35 UTC |
|         | addons-811100                                                                               |                      |                   |         |                     |                     |
| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:36 UTC |
|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | enable headlamp                                                                             | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:35 UTC | 19 Jul 24 03:36 UTC |
|         | -p addons-811100                                                                            |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-811100 addons                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:36 UTC | 19 Jul 24 03:37 UTC |
|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:36 UTC | 19 Jul 24 03:37 UTC |
|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | disable inspektor-gadget -p                                                                 | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
|         | addons-811100                                                                               |                      |                   |         |                     |                     |
| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
|         | volcano --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
| addons  | addons-811100 addons                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| ssh     | addons-811100 ssh curl -s                                                                   | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |                   |         |                     |                     |
|         | nginx.example.com'                                                                          |                      |                   |         |                     |                     |
| ip      | addons-811100 ip                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
|         | ingress-dns --alsologtostderr                                                               |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| addons  | addons-811100 addons                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:37 UTC |
|         | disable volumesnapshots                                                                     |                      |                   |         |                     |                     |
|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:37 UTC | 19 Jul 24 03:38 UTC |
|         | ingress --alsologtostderr -v=1                                                              |                      |                   |         |                     |                     |
| addons  | addons-811100 addons disable                                                                | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:38 UTC |
|         | gcp-auth --alsologtostderr                                                                  |                      |                   |         |                     |                     |
|         | -v=1                                                                                        |                      |                   |         |                     |                     |
| stop    | -p addons-811100                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:38 UTC | 19 Jul 24 03:39 UTC |
| addons  | enable dashboard -p                                                                         | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
|         | addons-811100                                                                               |                      |                   |         |                     |                     |
| addons  | disable dashboard -p                                                                        | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
|         | addons-811100                                                                               |                      |                   |         |                     |                     |
| addons  | disable gvisor -p                                                                           | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:39 UTC |
|         | addons-811100                                                                               |                      |                   |         |                     |                     |
| delete  | -p addons-811100                                                                            | addons-811100        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:39 UTC | 19 Jul 24 03:40 UTC |
| start   | -p nospam-907600 -n=1 --memory=2250 --wait=false                                            | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:40 UTC | 19 Jul 24 03:43 UTC |
|         | --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                       |                      |                   |         |                     |                     |
|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
| start   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:43 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| start   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| start   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC |                     |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | start --dry-run                                                                             |                      |                   |         |                     |                     |
| pause   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:44 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| pause   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:44 UTC | 19 Jul 24 03:45 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| pause   | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | pause                                                                                       |                      |                   |         |                     |                     |
| unpause | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| unpause | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| unpause | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:45 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | unpause                                                                                     |                      |                   |         |                     |                     |
| stop    | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:45 UTC | 19 Jul 24 03:46 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| stop    | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| stop    | nospam-907600 --log_dir                                                                     | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600                                 |                      |                   |         |                     |                     |
|         | stop                                                                                        |                      |                   |         |                     |                     |
| delete  | -p nospam-907600                                                                            | nospam-907600        | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:46 UTC |
| start   | -p functional-149600                                                                        | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:46 UTC | 19 Jul 24 03:50 UTC |
|         | --memory=4000                                                                               |                      |                   |         |                     |                     |
|         | --apiserver-port=8441                                                                       |                      |                   |         |                     |                     |
|         | --wait=all --driver=hyperv                                                                  |                      |                   |         |                     |                     |
| start   | -p functional-149600                                                                        | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:50 UTC |                     |
|         | --alsologtostderr -v=8                                                                      |                      |                   |         |                     |                     |
| cache   | functional-149600 cache add                                                                 | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:59 UTC | 19 Jul 24 04:01 UTC |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
| cache   | functional-149600 cache add                                                                 | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:01 UTC | 19 Jul 24 04:03 UTC |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
| cache   | functional-149600 cache add                                                                 | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:03 UTC | 19 Jul 24 04:05 UTC |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | functional-149600 cache add                                                                 | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:05 UTC | 19 Jul 24 04:06 UTC |
|         | minikube-local-cache-test:functional-149600                                                 |                      |                   |         |                     |                     |
| cache   | functional-149600 cache delete                                                              | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
|         | minikube-local-cache-test:functional-149600                                                 |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
|         | registry.k8s.io/pause:3.3                                                                   |                      |                   |         |                     |                     |
| cache   | list                                                                                        | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC | 19 Jul 24 04:06 UTC |
| ssh     | functional-149600 ssh sudo                                                                  | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
|         | crictl images                                                                               |                      |                   |         |                     |                     |
| ssh     | functional-149600                                                                           | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:06 UTC |                     |
|         | ssh sudo docker rmi                                                                         |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| ssh     | functional-149600 ssh                                                                       | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | functional-149600 cache reload                                                              | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:07 UTC | 19 Jul 24 04:09 UTC |
| ssh     | functional-149600 ssh                                                                       | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC |                     |
|         | sudo crictl inspecti                                                                        |                      |                   |         |                     |                     |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
|         | registry.k8s.io/pause:3.1                                                                   |                      |                   |         |                     |                     |
| cache   | delete                                                                                      | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
|         | registry.k8s.io/pause:latest                                                                |                      |                   |         |                     |                     |
| kubectl | functional-149600 kubectl --                                                                | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:12 UTC |                     |
|         | --context functional-149600                                                                 |                      |                   |         |                     |                     |
|         | get pods                                                                                    |                      |                   |         |                     |                     |
| start   | -p functional-149600                                                                        | functional-149600    | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:18 UTC |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                    |                      |                   |         |                     |                     |
|         | --wait=all                                                                                  |                      |                   |         |                     |                     |
|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|


==> Last Start <==
Log file created at: 2024/07/19 04:18:21
Running on machine: minikube6
Binary: Built with gc go1.22.5 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0719 04:18:21.766878   11908 out.go:291] Setting OutFile to fd 508 ...
I0719 04:18:21.767672   11908 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:18:21.767672   11908 out.go:304] Setting ErrFile to fd 628...
I0719 04:18:21.767704   11908 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0719 04:18:21.802747   11908 out.go:298] Setting JSON to false
I0719 04:18:21.806931   11908 start.go:129] hostinfo: {"hostname":"minikube6","uptime":21727,"bootTime":1721340973,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
W0719 04:18:21.807048   11908 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0719 04:18:21.811368   11908 out.go:177] * [functional-149600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
I0719 04:18:21.814314   11908 notify.go:220] Checking for updates...
I0719 04:18:21.815240   11908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
I0719 04:18:21.817700   11908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0719 04:18:21.821305   11908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
I0719 04:18:21.823619   11908 out.go:177]   - MINIKUBE_LOCATION=19302
I0719 04:18:21.827233   11908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0719 04:18:21.830726   11908 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 04:18:21.830936   11908 driver.go:392] Setting default libvirt URI to qemu:///system
I0719 04:18:27.199268   11908 out.go:177] * Using the hyperv driver based on existing profile
I0719 04:18:27.203184   11908 start.go:297] selected driver: hyperv
I0719 04:18:27.203184   11908 start.go:901] validating driver "hyperv" against &{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVe
rsion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0719 04:18:27.203184   11908 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0719 04:18:27.252525   11908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0719 04:18:27.252525   11908 cni.go:84] Creating CNI manager for ""
I0719 04:18:27.252525   11908 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0719 04:18:27.252525   11908 start.go:340] cluster config:
{Name:functional-149600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-149600 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.160.82 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0719 04:18:27.253102   11908 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0719 04:18:27.258660   11908 out.go:177] * Starting "functional-149600" primary control-plane node in "functional-149600" cluster
I0719 04:18:27.260254   11908 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0719 04:18:27.261246   11908 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
I0719 04:18:27.261246   11908 cache.go:56] Caching tarball of preloaded images
I0719 04:18:27.261246   11908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0719 04:18:27.261246   11908 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0719 04:18:27.261246   11908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-149600\config.json ...
I0719 04:18:27.263301   11908 start.go:360] acquireMachinesLock for functional-149600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0719 04:18:27.263301   11908 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-149600"
I0719 04:18:27.263301   11908 start.go:96] Skipping create...Using existing machine configuration
I0719 04:18:27.263301   11908 fix.go:54] fixHost starting: 
I0719 04:18:27.264677   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:18:30.104551   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:18:30.104551   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:30.104990   11908 fix.go:112] recreateIfNeeded on functional-149600: state=Running err=<nil>
W0719 04:18:30.104990   11908 fix.go:138] unexpected machine state, will restart: <nil>
I0719 04:18:30.108813   11908 out.go:177] * Updating the running hyperv "functional-149600" VM ...
I0719 04:18:30.112780   11908 machine.go:94] provisionDockerMachine start ...
I0719 04:18:30.112780   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:18:32.281432   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:18:32.282235   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:32.282235   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:18:34.875102   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:18:34.875610   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:34.880845   11908 main.go:141] libmachine: Using SSH client type: native
I0719 04:18:34.881446   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
I0719 04:18:34.881446   11908 main.go:141] libmachine: About to run SSH command:
hostname
I0719 04:18:35.021234   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600

I0719 04:18:35.021324   11908 buildroot.go:166] provisioning hostname "functional-149600"
I0719 04:18:35.021324   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:18:37.170899   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:18:37.170899   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:37.171700   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:18:39.728642   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:18:39.728642   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:39.734540   11908 main.go:141] libmachine: Using SSH client type: native
I0719 04:18:39.735114   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
I0719 04:18:39.735114   11908 main.go:141] libmachine: About to run SSH command:
sudo hostname functional-149600 && echo "functional-149600" | sudo tee /etc/hostname
I0719 04:18:39.893752   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-149600

I0719 04:18:39.893752   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:18:42.020647   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:18:42.021626   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:42.021626   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:18:44.602600   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:18:44.602600   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:44.610065   11908 main.go:141] libmachine: Using SSH client type: native
I0719 04:18:44.610065   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
I0719 04:18:44.610065   11908 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sfunctional-149600' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-149600/g' /etc/hosts;
			else 
				echo '127.0.1.1 functional-149600' | sudo tee -a /etc/hosts; 
			fi
		fi
I0719 04:18:44.753558   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0719 04:18:44.753558   11908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
I0719 04:18:44.753558   11908 buildroot.go:174] setting up certificates
I0719 04:18:44.753558   11908 provision.go:84] configureAuth start
I0719 04:18:44.753558   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:18:46.923705   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:18:46.923915   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:46.923915   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:18:49.456995   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:18:49.457146   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:49.457146   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:18:51.630822   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:18:51.630822   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:51.631464   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:18:54.211617   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:18:54.211617   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:54.211905   11908 provision.go:143] copyHostCerts
I0719 04:18:54.222331   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
I0719 04:18:54.222331   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
I0719 04:18:54.223238   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
I0719 04:18:54.233187   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
I0719 04:18:54.233187   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
I0719 04:18:54.233187   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
I0719 04:18:54.242612   11908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
I0719 04:18:54.242612   11908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
I0719 04:18:54.242944   11908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
I0719 04:18:54.244582   11908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-149600 san=[127.0.0.1 172.28.160.82 functional-149600 localhost minikube]
I0719 04:18:54.390527   11908 provision.go:177] copyRemoteCerts
I0719 04:18:54.401534   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0719 04:18:54.401534   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:18:56.573340   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:18:56.573340   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:56.573340   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:18:59.164667   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:18:59.164667   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:18:59.166132   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
I0719 04:18:59.268712   11908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8670097s)
I0719 04:18:59.269401   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0719 04:18:59.315850   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
I0719 04:18:59.360883   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0719 04:18:59.404491   11908 provision.go:87] duration metric: took 14.650763s to configureAuth
I0719 04:18:59.404491   11908 buildroot.go:189] setting minikube options for container-runtime
I0719 04:18:59.405435   11908 config.go:182] Loaded profile config "functional-149600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0719 04:18:59.405475   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:01.593936   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:01.593936   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:01.594118   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:04.230442   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:04.230442   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:04.239294   11908 main.go:141] libmachine: Using SSH client type: native
I0719 04:19:04.239760   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
I0719 04:19:04.239760   11908 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0719 04:19:04.380052   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0719 04:19:04.380052   11908 buildroot.go:70] root file system type: tmpfs
I0719 04:19:04.380261   11908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0719 04:19:04.380347   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:06.539303   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:06.539515   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:06.539515   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:09.127043   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:09.127043   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:09.133308   11908 main.go:141] libmachine: Using SSH client type: native
I0719 04:19:09.133448   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
I0719 04:19:09.133448   11908 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0719 04:19:09.306386   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0719 04:19:09.306918   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:11.508256   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:11.508256   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:11.509277   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:14.068121   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:14.068121   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:14.074438   11908 main.go:141] libmachine: Using SSH client type: native
I0719 04:19:14.075220   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
I0719 04:19:14.075220   11908 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0719 04:19:14.221726   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0719 04:19:14.221726   11908 machine.go:97] duration metric: took 44.1084347s to provisionDockerMachine
I0719 04:19:14.221726   11908 start.go:293] postStartSetup for "functional-149600" (driver="hyperv")
I0719 04:19:14.221726   11908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0719 04:19:14.235570   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0719 04:19:14.235570   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:16.392176   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:16.393208   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:16.393208   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:18.931070   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:18.931973   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:18.932493   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
I0719 04:19:19.038434   11908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8028089s)
I0719 04:19:19.053175   11908 ssh_runner.go:195] Run: cat /etc/os-release
I0719 04:19:19.060791   11908 info.go:137] Remote host: Buildroot 2023.02.9
I0719 04:19:19.060791   11908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
I0719 04:19:19.060791   11908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
I0719 04:19:19.061625   11908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
I0719 04:19:19.064733   11908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts -> hosts in /etc/test/nested/copy/9604
I0719 04:19:19.078488   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9604
I0719 04:19:19.096657   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
I0719 04:19:19.141481   11908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts --> /etc/test/nested/copy/9604/hosts (40 bytes)
I0719 04:19:19.186154   11908 start.go:296] duration metric: took 4.9643708s for postStartSetup
I0719 04:19:19.186154   11908 fix.go:56] duration metric: took 51.9222508s for fixHost
I0719 04:19:19.186154   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:21.337933   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:21.337933   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:21.337933   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:23.870420   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:23.870420   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:23.875775   11908 main.go:141] libmachine: Using SSH client type: native
I0719 04:19:23.876403   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
I0719 04:19:23.876403   11908 main.go:141] libmachine: About to run SSH command:
date +%!s(MISSING).%!N(MISSING)
I0719 04:19:24.012488   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362764.016863919

I0719 04:19:24.012488   11908 fix.go:216] guest clock: 1721362764.016863919
I0719 04:19:24.012488   11908 fix.go:229] Guest: 2024-07-19 04:19:24.016863919 +0000 UTC Remote: 2024-07-19 04:19:19.1861548 +0000 UTC m=+57.580185601 (delta=4.830709119s)
I0719 04:19:24.012488   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:26.182442   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:26.182676   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:26.182676   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:28.790985   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:28.790985   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:28.797512   11908 main.go:141] libmachine: Using SSH client type: native
I0719 04:19:28.798275   11908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.160.82 22 <nil> <nil>}
I0719 04:19:28.798275   11908 main.go:141] libmachine: About to run SSH command:
sudo date -s @1721362764
I0719 04:19:28.954624   11908 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:19:24 UTC 2024

I0719 04:19:28.954624   11908 fix.go:236] clock set: Fri Jul 19 04:19:24 UTC 2024 (err=<nil>)
I0719 04:19:28.954624   11908 start.go:83] releasing machines lock for "functional-149600", held for 1m1.6906073s
I0719 04:19:28.954952   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:31.171228   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:31.171393   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:31.171393   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:34.042286   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:34.042286   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:34.047433   11908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
I0719 04:19:34.047433   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:34.059846   11908 ssh_runner.go:195] Run: cat /version.json
I0719 04:19:34.059846   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-149600 ).state
I0719 04:19:36.423615   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:36.423615   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:36.423615   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:36.523606   11908 main.go:141] libmachine: [stdout =====>] : Running

I0719 04:19:36.523628   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:36.523684   11908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-149600 ).networkadapters[0]).ipaddresses[0]
I0719 04:19:39.126747   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:39.126747   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:39.127737   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
I0719 04:19:39.227169   11908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1795571s)
W0719 04:19:39.227169   11908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
stdout:

stderr:
bash: line 1: curl.exe: command not found
I0719 04:19:39.258833   11908 main.go:141] libmachine: [stdout =====>] : 172.28.160.82

I0719 04:19:39.258833   11908 main.go:141] libmachine: [stderr =====>] : 
I0719 04:19:39.259777   11908 sshutil.go:53] new ssh client: &{IP:172.28.160.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-149600\id_rsa Username:docker}
W0719 04:19:39.339726   11908 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
W0719 04:19:39.339839   11908 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0719 04:19:39.353457   11908 ssh_runner.go:235] Completed: cat /version.json: (5.2935496s)
I0719 04:19:39.364534   11908 ssh_runner.go:195] Run: systemctl --version
I0719 04:19:39.384482   11908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0719 04:19:39.392154   11908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0719 04:19:39.403112   11908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0719 04:19:39.419839   11908 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0719 04:19:39.419839   11908 start.go:495] detecting cgroup driver to use...
I0719 04:19:39.420108   11908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0719 04:19:39.467456   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0719 04:19:39.499239   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0719 04:19:39.522220   11908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0719 04:19:39.533043   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0719 04:19:39.562936   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0719 04:19:39.594192   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0719 04:19:39.624160   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0719 04:19:39.654835   11908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0719 04:19:39.684342   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0719 04:19:39.714405   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0719 04:19:39.744483   11908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0719 04:19:39.773149   11908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0719 04:19:39.804037   11908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0719 04:19:39.833349   11908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0719 04:19:40.058814   11908 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0719 04:19:40.099575   11908 start.go:495] detecting cgroup driver to use...
I0719 04:19:40.111657   11908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0719 04:19:40.147724   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0719 04:19:40.182021   11908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0719 04:19:40.219208   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0719 04:19:40.252518   11908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0719 04:19:40.274665   11908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0719 04:19:40.317754   11908 ssh_runner.go:195] Run: which cri-dockerd
I0719 04:19:40.334468   11908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0719 04:19:40.352225   11908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0719 04:19:40.391447   11908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0719 04:19:40.611469   11908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0719 04:19:40.826485   11908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0719 04:19:40.826637   11908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0719 04:19:40.870608   11908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0719 04:19:41.079339   11908 ssh_runner.go:195] Run: sudo systemctl restart docker
I0719 04:21:08.989750   11908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m27.9093148s)
I0719 04:21:09.002113   11908 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
I0719 04:21:09.085419   11908 out.go:177] 
W0719 04:21:09.088414   11908 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
stdout:

stderr:
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.

sudo journalctl --no-pager -u docker:
-- stdout --
Jul 19 03:48:58 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.531914350Z" level=info msg="Starting up"
Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.534422132Z" level=info msg="containerd not running, starting managed containerd"
Jul 19 03:48:58 functional-149600 dockerd[677]: time="2024-07-19T03:48:58.535803677Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=684
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.567717825Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594617108Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594655809Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594718511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594736112Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.594817914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595026521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595269429Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595407134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595431535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595445135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595540038Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.595881749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598812246Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.598917149Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599162457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599284261Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599462867Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.599605372Z" level=info msg="metadata content store policy set" policy=shared
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625338316Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625549423Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625577124Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625596425Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625614725Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.625734329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626111642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626552556Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626708661Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626731962Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626749163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626764763Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626779864Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626807165Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626826665Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626842566Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626857266Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626871767Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626901168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626925168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626942469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626958269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626972470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.626986970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627018171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627050773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627067473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627087974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627102874Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627118075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627133475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627151576Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627179977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627207478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627223378Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627308681Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627497987Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627603491Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627628491Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627642192Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627659693Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.627677693Z" level=info msg="NRI interface is disabled by configuration."
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628139708Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628464419Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628586223Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 19 03:48:58 functional-149600 dockerd[684]: time="2024-07-19T03:48:58.628648825Z" level=info msg="containerd successfully booted in 0.062295s"
Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.605880874Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.640734047Z" level=info msg="Loading containers: start."
Jul 19 03:48:59 functional-149600 dockerd[677]: time="2024-07-19T03:48:59.813575066Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.031273218Z" level=info msg="Loading containers: done."
Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052569890Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.052711603Z" level=info msg="Daemon has completed initialization"
Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174428772Z" level=info msg="API listen on /var/run/docker.sock"
Jul 19 03:49:00 functional-149600 systemd[1]: Started Docker Application Container Engine.
Jul 19 03:49:00 functional-149600 dockerd[677]: time="2024-07-19T03:49:00.174659093Z" level=info msg="API listen on [::]:2376"
Jul 19 03:49:31 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.327916124Z" level=info msg="Processing signal 'terminated'"
Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.330803748Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332114659Z" level=info msg="Daemon shutdown complete"
Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332413462Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 19 03:49:31 functional-149600 dockerd[677]: time="2024-07-19T03:49:31.332761765Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 19 03:49:32 functional-149600 systemd[1]: docker.service: Deactivated successfully.
Jul 19 03:49:32 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:49:32 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.396667332Z" level=info msg="Starting up"
Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.397798042Z" level=info msg="containerd not running, starting managed containerd"
Jul 19 03:49:32 functional-149600 dockerd[1090]: time="2024-07-19T03:49:32.402462181Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1096
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.432470534Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459514962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459615563Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459667563Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459682563Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459912965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.459936465Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460088967Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460343269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460374469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460396969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460425770Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.460819273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463853798Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.463983400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464200501Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464295002Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464331702Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464352803Z" level=info msg="metadata content store policy set" policy=shared
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464795906Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464850207Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464884207Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464929707Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.464948008Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465078809Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465467012Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465770315Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465863515Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465884716Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465898416Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465911416Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465922816Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465936016Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465964116Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465979716Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.465991216Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466002317Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466032417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466048417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466060817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466093917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466108217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466120718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466132618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466145718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466159818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466170818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466182018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466193718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466207818Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466226918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466239919Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466250719Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466362320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466382920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466470120Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466490821Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466502121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466521321Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.466787523Z" level=info msg="NRI interface is disabled by configuration."
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467170726Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467422729Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467502129Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 19 03:49:32 functional-149600 dockerd[1096]: time="2024-07-19T03:49:32.467596330Z" level=info msg="containerd successfully booted in 0.035978s"
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.446816884Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.479266357Z" level=info msg="Loading containers: start."
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.611087768Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.727699751Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.826817487Z" level=info msg="Loading containers: done."
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851788197Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.851961999Z" level=info msg="Daemon has completed initialization"
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902179022Z" level=info msg="API listen on /var/run/docker.sock"
Jul 19 03:49:33 functional-149600 systemd[1]: Started Docker Application Container Engine.
Jul 19 03:49:33 functional-149600 dockerd[1090]: time="2024-07-19T03:49:33.902385724Z" level=info msg="API listen on [::]:2376"
Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.464303420Z" level=info msg="Processing signal 'terminated'"
Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466178836Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466444238Z" level=info msg="Daemon shutdown complete"
Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466617340Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 19 03:49:43 functional-149600 dockerd[1090]: time="2024-07-19T03:49:43.466645140Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 19 03:49:43 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
Jul 19 03:49:44 functional-149600 systemd[1]: docker.service: Deactivated successfully.
Jul 19 03:49:44 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:49:44 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.526170370Z" level=info msg="Starting up"
Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.527875185Z" level=info msg="containerd not running, starting managed containerd"
Jul 19 03:49:44 functional-149600 dockerd[1443]: time="2024-07-19T03:49:44.529085595Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1449
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.561806771Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.588986100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589119201Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589175201Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589189602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589217102Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589231002Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589372603Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589466304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589487004Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589498304Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589522204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.589693506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.592940233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593046334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593164935Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593288836Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593325336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593343737Z" level=info msg="metadata content store policy set" policy=shared
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593655039Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593711040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593798040Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593840041Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593855841Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.593915841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594246644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594583947Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594609647Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594625347Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594640447Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594659648Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594674648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594689748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594715248Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594831949Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594864649Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594894750Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594912750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594927150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594938550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594949850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594961050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594988850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.594999351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595010451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595022151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595034451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595044251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595054151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595064451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595080551Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595100051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595112651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595122752Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595253153Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595360854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595377754Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595405554Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595414854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595426254Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595435454Z" level=info msg="NRI interface is disabled by configuration."
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595711057Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595836558Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595937958Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Jul 19 03:49:44 functional-149600 dockerd[1449]: time="2024-07-19T03:49:44.595991659Z" level=info msg="containerd successfully booted in 0.035148s"
Jul 19 03:49:45 functional-149600 dockerd[1443]: time="2024-07-19T03:49:45.571450281Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jul 19 03:49:48 functional-149600 dockerd[1443]: time="2024-07-19T03:49:48.883728000Z" level=info msg="Loading containers: start."
Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.006401134Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.127192752Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.218929925Z" level=info msg="Loading containers: done."
Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249486583Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.249683984Z" level=info msg="Daemon has completed initialization"
Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299922608Z" level=info msg="API listen on /var/run/docker.sock"
Jul 19 03:49:49 functional-149600 systemd[1]: Started Docker Application Container Engine.
Jul 19 03:49:49 functional-149600 dockerd[1443]: time="2024-07-19T03:49:49.299991408Z" level=info msg="API listen on [::]:2376"
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812314634Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.812468840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.813783594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.814181811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.823808405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826750026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826767127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.826866331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899025089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899127893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899277199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.899669815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918254477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.918562790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.920597373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:57 functional-149600 dockerd[1449]: time="2024-07-19T03:49:57.921124695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387701734Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387801838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387829539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.387963045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436646441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.436931752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437090859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.437275166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539345743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.539671255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.540445185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.541481126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550468276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.550879792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.551210305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:49:58 functional-149600 dockerd[1449]: time="2024-07-19T03:49:58.555850986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.238972986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239834399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.239916700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.240127804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589855933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589966535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.589987335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:20 functional-149600 dockerd[1449]: time="2024-07-19T03:50:20.590436642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002502056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002639758Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.002654558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.003059965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053221935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053490639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.053805144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.054875960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794781741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794871142Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794886242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.794980442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806139221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.806918426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807029827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:21 functional-149600 dockerd[1449]: time="2024-07-19T03:50:21.807551631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.625163713Z" level=info msg="ignoring event" container=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.629961233Z" level=info msg="shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631114094Z" level=warning msg="cleaning up after shim disconnected" id=ba286af7ebf4f3d269da9e95499560b441ca4900d0ee2ca03e21d1873c8ce9dd namespace=moby
Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.631402359Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.674442159Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:50:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
Jul 19 03:50:27 functional-149600 dockerd[1443]: time="2024-07-19T03:50:27.838943371Z" level=info msg="ignoring event" container=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.839886257Z" level=info msg="shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840028839Z" level=warning msg="cleaning up after shim disconnected" id=082cf4c72b5877fdb0581db4a220fbc38425b3c6e708dcc483b624be92864d0b namespace=moby
Jul 19 03:50:27 functional-149600 dockerd[1449]: time="2024-07-19T03:50:27.840046637Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303237678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.303415569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.304773802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.305273178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593684961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593784156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593803755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:50:28 functional-149600 dockerd[1449]: time="2024-07-19T03:50:28.593913350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 19 03:51:50 functional-149600 systemd[1]: Stopping Docker Application Container Engine...
Jul 19 03:51:50 functional-149600 dockerd[1443]: time="2024-07-19T03:51:50.861615472Z" level=info msg="Processing signal 'terminated'"
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079285636Z" level=info msg="shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079436436Z" level=warning msg="cleaning up after shim disconnected" id=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.079453436Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.080991335Z" level=info msg="ignoring event" container=57ab2e19f16217e26795add975b3a8e8ce1258bdfb91299c678cc484b2d081f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.090996234Z" level=info msg="ignoring event" container=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091838634Z" level=info msg="shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091953834Z" level=warning msg="cleaning up after shim disconnected" id=94c1b82cbf317f7058f94f0ece31cd04bd262b47fdf0d217b39dcc7f31868550 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.091968234Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.112734230Z" level=info msg="ignoring event" container=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116127330Z" level=info msg="shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116200030Z" level=warning msg="cleaning up after shim disconnected" id=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116210930Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116537230Z" level=info msg="shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116585530Z" level=warning msg="cleaning up after shim disconnected" id=cbc372543b24f69d7372512eec82596c364afb5882783a6786cf4a166018bd4e namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.116614030Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116823530Z" level=info msg="ignoring event" container=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116849530Z" level=info msg="ignoring event" container=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116946330Z" level=info msg="ignoring event" container=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.116988930Z" level=info msg="ignoring event" container=7f103559985f4dccbd0e0aa5c2d4f352a5c123dc209270bc6b25d887df21b9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122714429Z" level=info msg="shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122848129Z" level=warning msg="cleaning up after shim disconnected" id=d0d0a1ba381c97420f4c6d57493d3e7a2db4e4886ab243abcc0b3d30eb6b138b namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.122861729Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128254128Z" level=info msg="shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128388728Z" level=warning msg="cleaning up after shim disconnected" id=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.128443128Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131550327Z" level=info msg="shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131620327Z" level=warning msg="cleaning up after shim disconnected" id=4df5049ab468cf8349004f328ee5028882fe389e2e39fa2d208cd3e887c84a26 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.131665527Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148015624Z" level=info msg="shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148155124Z" level=warning msg="cleaning up after shim disconnected" id=905201add4b64b1760aebcd9f01092f1faa8130fd97878bee1fa202fc179a0f2 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.148209624Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182402919Z" level=info msg="shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182503119Z" level=warning msg="cleaning up after shim disconnected" id=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.182514319Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183465819Z" level=info msg="shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183548319Z" level=warning msg="cleaning up after shim disconnected" id=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.183560019Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185575018Z" level=info msg="ignoring event" container=b9ef0d891fba8f20af7085e91c47fcffe3b545a2b0c9b700d57141c72085de86 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185722318Z" level=info msg="ignoring event" container=db6752bbe384df58e578c23aa6679f83d2773ac15394c37137f45ace3ee8f703 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185758518Z" level=info msg="ignoring event" container=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185811118Z" level=info msg="ignoring event" container=896e262d00cbb5246322e250c9e9b60995e4fd1e750f73d895fe347f107bc71f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1443]: time="2024-07-19T03:51:51.185852418Z" level=info msg="ignoring event" container=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186041918Z" level=info msg="shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186095318Z" level=warning msg="cleaning up after shim disconnected" id=73e768e55c1dcccea876608e87b91abfcf73a555851f709efb9b2cdf937f1fc0 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.186139118Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187552418Z" level=info msg="shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187672518Z" level=warning msg="cleaning up after shim disconnected" id=04cda8c72c0105fc506f326eae40c5aea791812aa202a7f94d4c63ccb201d506 namespace=moby
Jul 19 03:51:51 functional-149600 dockerd[1449]: time="2024-07-19T03:51:51.187687018Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987746429Z" level=info msg="shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987797629Z" level=warning msg="cleaning up after shim disconnected" id=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f namespace=moby
Jul 19 03:51:55 functional-149600 dockerd[1449]: time="2024-07-19T03:51:55.987859329Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:51:55 functional-149600 dockerd[1443]: time="2024-07-19T03:51:55.988258129Z" level=info msg="ignoring event" container=2c5531ff2f2c047c3db2c4cd56544d2a0de25c2b7c8d650d7b2bc1689118b38f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:51:56 functional-149600 dockerd[1449]: time="2024-07-19T03:51:56.011512525Z" level=warning msg="cleanup warnings time=\"2024-07-19T03:51:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.013086308Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d
Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.070091705Z" level=info msg="ignoring event" container=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.070778533Z" level=info msg="shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071068387Z" level=warning msg="cleaning up after shim disconnected" id=86dafd3f7117e16384c64fdd72ca59ca9296059cc98c72095c4d09deeb357e1d namespace=moby
Jul 19 03:52:01 functional-149600 dockerd[1449]: time="2024-07-19T03:52:01.071124597Z" level=info msg="cleaning up dead shim" namespace=moby
Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.147257850Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148746827Z" level=info msg="Daemon shutdown complete"
Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.148999274Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Jul 19 03:52:01 functional-149600 dockerd[1443]: time="2024-07-19T03:52:01.149087991Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Deactivated successfully.
Jul 19 03:52:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:52:02 functional-149600 systemd[1]: docker.service: Consumed 5.480s CPU time.
Jul 19 03:52:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:52:02 functional-149600 dockerd[4315]: time="2024-07-19T03:52:02.207309394Z" level=info msg="Starting up"
Jul 19 03:53:02 functional-149600 dockerd[4315]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 03:53:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 03:53:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Jul 19 03:53:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:53:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:53:02 functional-149600 dockerd[4517]: time="2024-07-19T03:53:02.427497928Z" level=info msg="Starting up"
Jul 19 03:54:02 functional-149600 dockerd[4517]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 03:54:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 03:54:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Jul 19 03:54:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:54:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:54:02 functional-149600 dockerd[4794]: time="2024-07-19T03:54:02.625522087Z" level=info msg="Starting up"
Jul 19 03:55:02 functional-149600 dockerd[4794]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 03:55:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 03:55:02 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Jul 19 03:55:02 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:55:02 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:55:02 functional-149600 dockerd[5024]: time="2024-07-19T03:55:02.867963022Z" level=info msg="Starting up"
Jul 19 03:56:02 functional-149600 dockerd[5024]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 03:56:02 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 03:56:02 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 03:56:02 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 03:56:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 4.
Jul 19 03:56:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:56:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:56:03 functional-149600 dockerd[5250]: time="2024-07-19T03:56:03.114424888Z" level=info msg="Starting up"
Jul 19 03:57:03 functional-149600 dockerd[5250]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 03:57:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 03:57:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 5.
Jul 19 03:57:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:57:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:57:03 functional-149600 dockerd[5584]: time="2024-07-19T03:57:03.385021046Z" level=info msg="Starting up"
Jul 19 03:58:03 functional-149600 dockerd[5584]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 03:58:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 03:58:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 6.
Jul 19 03:58:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:58:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:58:03 functional-149600 dockerd[5803]: time="2024-07-19T03:58:03.585932078Z" level=info msg="Starting up"
Jul 19 03:59:03 functional-149600 dockerd[5803]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 03:59:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 03:59:03 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 7.
Jul 19 03:59:03 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 03:59:03 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 03:59:03 functional-149600 dockerd[6023]: time="2024-07-19T03:59:03.812709134Z" level=info msg="Starting up"
Jul 19 04:00:03 functional-149600 dockerd[6023]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:00:03 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:00:03 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:00:03 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:00:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 8.
Jul 19 04:00:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:00:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:00:04 functional-149600 dockerd[6281]: time="2024-07-19T04:00:04.125100395Z" level=info msg="Starting up"
Jul 19 04:01:04 functional-149600 dockerd[6281]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:01:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:01:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 9.
Jul 19 04:01:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:01:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:01:04 functional-149600 dockerd[6502]: time="2024-07-19T04:01:04.384065143Z" level=info msg="Starting up"
Jul 19 04:02:04 functional-149600 dockerd[6502]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:02:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:02:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 10.
Jul 19 04:02:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:02:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:02:04 functional-149600 dockerd[6727]: time="2024-07-19T04:02:04.629921832Z" level=info msg="Starting up"
Jul 19 04:03:04 functional-149600 dockerd[6727]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:03:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:03:04 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 11.
Jul 19 04:03:04 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:03:04 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:03:04 functional-149600 dockerd[6945]: time="2024-07-19T04:03:04.881594773Z" level=info msg="Starting up"
Jul 19 04:04:04 functional-149600 dockerd[6945]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:04:04 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:04:04 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:04:04 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:04:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 12.
Jul 19 04:04:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:04:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:04:05 functional-149600 dockerd[7168]: time="2024-07-19T04:04:05.123312469Z" level=info msg="Starting up"
Jul 19 04:05:05 functional-149600 dockerd[7168]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:05:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:05:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 13.
Jul 19 04:05:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:05:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:05:05 functional-149600 dockerd[7390]: time="2024-07-19T04:05:05.382469694Z" level=info msg="Starting up"
Jul 19 04:06:05 functional-149600 dockerd[7390]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:06:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:06:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 14.
Jul 19 04:06:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:06:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:06:05 functional-149600 dockerd[7633]: time="2024-07-19T04:06:05.593228245Z" level=info msg="Starting up"
Jul 19 04:07:05 functional-149600 dockerd[7633]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:07:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:07:05 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 15.
Jul 19 04:07:05 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:07:05 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:07:05 functional-149600 dockerd[7873]: time="2024-07-19T04:07:05.880412514Z" level=info msg="Starting up"
Jul 19 04:08:05 functional-149600 dockerd[7873]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:08:05 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:08:05 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:08:05 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:08:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 16.
Jul 19 04:08:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:08:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:08:06 functional-149600 dockerd[8117]: time="2024-07-19T04:08:06.127986862Z" level=info msg="Starting up"
Jul 19 04:09:06 functional-149600 dockerd[8117]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:09:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:09:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 17.
Jul 19 04:09:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:09:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:09:06 functional-149600 dockerd[8352]: time="2024-07-19T04:09:06.371958374Z" level=info msg="Starting up"
Jul 19 04:10:06 functional-149600 dockerd[8352]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:10:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:10:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 18.
Jul 19 04:10:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:10:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:10:06 functional-149600 dockerd[8667]: time="2024-07-19T04:10:06.620432494Z" level=info msg="Starting up"
Jul 19 04:11:06 functional-149600 dockerd[8667]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:11:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:11:06 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 19.
Jul 19 04:11:06 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:11:06 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:11:06 functional-149600 dockerd[8889]: time="2024-07-19T04:11:06.842404443Z" level=info msg="Starting up"
Jul 19 04:12:06 functional-149600 dockerd[8889]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:12:06 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:12:06 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:12:06 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:12:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 20.
Jul 19 04:12:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:12:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:12:07 functional-149600 dockerd[9109]: time="2024-07-19T04:12:07.102473619Z" level=info msg="Starting up"
Jul 19 04:13:07 functional-149600 dockerd[9109]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:13:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:13:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 21.
Jul 19 04:13:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:13:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:13:07 functional-149600 dockerd[9440]: time="2024-07-19T04:13:07.376165478Z" level=info msg="Starting up"
Jul 19 04:14:07 functional-149600 dockerd[9440]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:14:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:14:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 22.
Jul 19 04:14:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:14:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:14:07 functional-149600 dockerd[9662]: time="2024-07-19T04:14:07.590302364Z" level=info msg="Starting up"
Jul 19 04:15:07 functional-149600 dockerd[9662]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:15:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:15:07 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 23.
Jul 19 04:15:07 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:15:07 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:15:07 functional-149600 dockerd[9879]: time="2024-07-19T04:15:07.829795571Z" level=info msg="Starting up"
Jul 19 04:16:07 functional-149600 dockerd[9879]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:16:07 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:16:07 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:16:07 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:16:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 24.
Jul 19 04:16:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:16:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:16:08 functional-149600 dockerd[10215]: time="2024-07-19T04:16:08.121334668Z" level=info msg="Starting up"
Jul 19 04:17:08 functional-149600 dockerd[10215]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:17:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:17:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 25.
Jul 19 04:17:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:17:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:17:08 functional-149600 dockerd[10435]: time="2024-07-19T04:17:08.312026488Z" level=info msg="Starting up"
Jul 19 04:18:08 functional-149600 dockerd[10435]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:18:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:18:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 26.
Jul 19 04:18:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:18:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:18:08 functional-149600 dockerd[10658]: time="2024-07-19T04:18:08.567478720Z" level=info msg="Starting up"
Jul 19 04:19:08 functional-149600 dockerd[10658]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:19:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
Jul 19 04:19:08 functional-149600 systemd[1]: docker.service: Scheduled restart job, restart counter is at 27.
Jul 19 04:19:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:19:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:19:08 functional-149600 dockerd[11028]: time="2024-07-19T04:19:08.881713903Z" level=info msg="Starting up"
Jul 19 04:19:41 functional-149600 dockerd[11028]: time="2024-07-19T04:19:41.104825080Z" level=info msg="Processing signal 'terminated'"
Jul 19 04:20:08 functional-149600 dockerd[11028]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:20:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:20:08 functional-149600 systemd[1]: Stopped Docker Application Container Engine.
Jul 19 04:20:08 functional-149600 systemd[1]: Starting Docker Application Container Engine...
Jul 19 04:20:08 functional-149600 dockerd[11475]: time="2024-07-19T04:20:08.959849556Z" level=info msg="Starting up"
Jul 19 04:21:08 functional-149600 dockerd[11475]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jul 19 04:21:08 functional-149600 systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 19 04:21:08 functional-149600 systemd[1]: Failed to start Docker Application Container Engine.
-- /stdout --
W0719 04:21:09.089413   11908 out.go:239] * 
W0719 04:21:09.091413   11908 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 04:21:09.095524   11908 out.go:177] 
***
--- FAIL: TestFunctional/serial/LogsCmd (94.70s)
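The log above shows docker.service cycling through systemd's restart logic roughly once a minute (restart counters 25 through 27), each attempt timing out while dialing /run/containerd/containerd.sock. When triaging a capture like this offline, a quick check is to count the restart attempts and the highest counter reached. A minimal sketch; the inline sample stands in for a real `journalctl -u docker` or `minikube logs` excerpt:

```shell
# Sample journal lines, standing in for a captured `journalctl -u docker` excerpt.
log='docker.service: Scheduled restart job, restart counter is at 25.
docker.service: Scheduled restart job, restart counter is at 26.
docker.service: Scheduled restart job, restart counter is at 27.'

# Count the restart attempts and report the highest counter reached.
printf '%s\n' "$log" |
  grep -o 'restart counter is at [0-9]*' |
  awk '{ if ($NF > max) max = $NF } END { print NR " restarts, max counter " max }'
# → 3 restarts, max counter 27
```

A steadily climbing counter like this one points at a persistent startup failure (here, the containerd socket never coming up) rather than a one-off crash.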
TestFunctional/parallel (0s)
=== RUN   TestFunctional/parallel
functional_test.go:168: Unable to run more tests (deadline exceeded)
--- FAIL: TestFunctional/parallel (0.00s)
TestMultiControlPlane/serial/PingHostFromPods (70.43s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-drzm5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-drzm5 -- sh -c "ping -c 1 172.28.160.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-drzm5 -- sh -c "ping -c 1 172.28.160.1": exit status 1 (10.5054974s)
-- stdout --
	PING 172.28.160.1 (172.28.160.1): 56 data bytes
	
	--- 172.28.160.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	W0719 04:41:06.916714    6760 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.28.160.1) from pod (busybox-fc5497c4f-drzm5): exit status 1
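The stderr warning in each exec above is unrelated to the ping failure itself: the Docker CLI could not find metadata for its current context "default". Docker's context store keys each directory under `contexts\meta\` by the SHA-256 digest of the context name, which is why the missing path ends in 37a8eec1…; a quick sanity check of that mapping:

```shell
# Docker's CLI context store keys each context directory under
# .docker/contexts/meta/ by the SHA-256 digest of the context name.
# The digest of "default" matches the directory named in the warning.
printf '%s' default | sha256sum | cut -d' ' -f1
# → 37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f
```

Re-selecting the context (for example `docker context use default`) typically recreates the missing meta.json; that command is standard Docker CLI, but whether it silences this particular warning on the Jenkins agent is an assumption.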
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-njwwk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-njwwk -- sh -c "ping -c 1 172.28.160.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-njwwk -- sh -c "ping -c 1 172.28.160.1": exit status 1 (10.5023402s)
-- stdout --
	PING 172.28.160.1 (172.28.160.1): 56 data bytes
	
	--- 172.28.160.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	W0719 04:41:17.967748    6820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.28.160.1) from pod (busybox-fc5497c4f-njwwk): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-nkb7m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-nkb7m -- sh -c "ping -c 1 172.28.160.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-nkb7m -- sh -c "ping -c 1 172.28.160.1": exit status 1 (10.5287536s)
-- stdout --
	PING 172.28.160.1 (172.28.160.1): 56 data bytes
	
	--- 172.28.160.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss
-- /stdout --
** stderr ** 
	W0719 04:41:28.973373    1432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1
** /stderr **
ha_test.go:219: Failed to ping host (172.28.160.1) from pod (busybox-fc5497c4f-nkb7m): exit status 1
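All three pods resolve host.minikube.internal but lose 100% of ICMP echoes to the Hyper-V host gateway 172.28.160.1. On Hyper-V a common culprit is the Windows host firewall dropping ICMP Echo Requests arriving from the guest network, though the log alone cannot confirm that. The test's pass/fail signal reduces to the loss figure in the ping statistics, which can be pulled out like this (the printf sample stands in for a live `kubectl exec` capture):

```shell
# Extract the packet-loss figure from BusyBox ping statistics.
# The lines below reproduce the pod's captured ping output.
printf '%s\n' \
  'PING 172.28.160.1 (172.28.160.1): 56 data bytes' \
  '' \
  '--- 172.28.160.1 ping statistics ---' \
  '1 packets transmitted, 0 packets received, 100% packet loss' |
  awk -F', ' '/packet loss/ { print $3 }'
# → 100% packet loss
```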
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-062500 -n ha-062500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-062500 -n ha-062500: (12.8456595s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 logs -n 25: (9.2047861s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| cache   | delete                                                                   | minikube          | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:09 UTC | 19 Jul 24 04:09 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |                   |         |                     |                     |
	| kubectl | functional-149600 kubectl --                                             | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:12 UTC |                     |
	|         | --context functional-149600                                              |                   |                   |         |                     |                     |
	|         | get pods                                                                 |                   |                   |         |                     |                     |
	| start   | -p functional-149600                                                     | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:18 UTC |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |                   |         |                     |                     |
	|         | --wait=all                                                               |                   |                   |         |                     |                     |
	| delete  | -p functional-149600                                                     | functional-149600 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:26 UTC | 19 Jul 24 04:28 UTC |
	| start   | -p ha-062500 --wait=true                                                 | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:28 UTC | 19 Jul 24 04:40 UTC |
	|         | --memory=2200 --ha                                                       |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr                                                   |                   |                   |         |                     |                     |
	|         | --driver=hyperv                                                          |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- apply -f                                                 | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:40 UTC | 19 Jul 24 04:40 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml                                       |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- rollout status                                           | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:40 UTC | 19 Jul 24 04:40 UTC |
	|         | deployment/busybox                                                       |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- get pods -o                                              | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:40 UTC | 19 Jul 24 04:40 UTC |
	|         | jsonpath='{.items[*].status.podIP}'                                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- get pods -o                                              | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:40 UTC | 19 Jul 24 04:40 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:40 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-drzm5 --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-njwwk --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-nkb7m --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io                                                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-drzm5 --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-njwwk --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-nkb7m --                                               |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default                                              |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-drzm5 -- nslookup                                      |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-njwwk -- nslookup                                      |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-nkb7m -- nslookup                                      |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- get pods -o                                              | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                     |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-drzm5                                                  |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC |                     |
	|         | busybox-fc5497c4f-drzm5 -- sh                                            |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.160.1                                                |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-njwwk                                                  |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC |                     |
	|         | busybox-fc5497c4f-njwwk -- sh                                            |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.160.1                                                |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC | 19 Jul 24 04:41 UTC |
	|         | busybox-fc5497c4f-nkb7m                                                  |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                                                        |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk                                             |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                  |                   |                   |         |                     |                     |
	| kubectl | -p ha-062500 -- exec                                                     | ha-062500         | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:41 UTC |                     |
	|         | busybox-fc5497c4f-nkb7m -- sh                                            |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.160.1                                                |                   |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:28:29
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:28:29.297538    8304 out.go:291] Setting OutFile to fd 732 ...
	I0719 04:28:29.298520    8304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:28:29.298520    8304 out.go:304] Setting ErrFile to fd 896...
	I0719 04:28:29.298520    8304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:28:29.320671    8304 out.go:298] Setting JSON to false
	I0719 04:28:29.323662    8304 start.go:129] hostinfo: {"hostname":"minikube6","uptime":22335,"bootTime":1721340973,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 04:28:29.323662    8304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 04:28:29.332562    8304 out.go:177] * [ha-062500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 04:28:29.337089    8304 notify.go:220] Checking for updates...
	I0719 04:28:29.338037    8304 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:28:29.340092    8304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:28:29.344031    8304 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 04:28:29.346479    8304 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:28:29.348900    8304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:28:29.352525    8304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:28:34.813257    8304 out.go:177] * Using the hyperv driver based on user configuration
	I0719 04:28:34.818688    8304 start.go:297] selected driver: hyperv
	I0719 04:28:34.818721    8304 start.go:901] validating driver "hyperv" against <nil>
	I0719 04:28:34.818815    8304 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:28:34.865459    8304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:28:34.867776    8304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:28:34.867819    8304 cni.go:84] Creating CNI manager for ""
	I0719 04:28:34.867819    8304 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 04:28:34.867819    8304 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:28:34.867819    8304 start.go:340] cluster config:
	{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:28:34.868551    8304 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:28:34.873638    8304 out.go:177] * Starting "ha-062500" primary control-plane node in "ha-062500" cluster
	I0719 04:28:34.876335    8304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:28:34.876335    8304 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 04:28:34.876335    8304 cache.go:56] Caching tarball of preloaded images
	I0719 04:28:34.876335    8304 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:28:34.877211    8304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:28:34.877771    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:28:34.878040    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json: {Name:mk584e85affd6cb4e038183a910b65d81c19636d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:28:34.878814    8304 start.go:360] acquireMachinesLock for ha-062500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:28:34.878814    8304 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-062500"
	I0719 04:28:34.879818    8304 start.go:93] Provisioning new machine with config: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:28:34.879818    8304 start.go:125] createHost starting for "" (driver="hyperv")
	I0719 04:28:34.881474    8304 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:28:34.882769    8304 start.go:159] libmachine.API.Create for "ha-062500" (driver="hyperv")
	I0719 04:28:34.882769    8304 client.go:168] LocalClient.Create starting
	I0719 04:28:34.883046    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 04:28:34.883046    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:28:34.883046    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:28:34.883046    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 04:28:34.884172    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:28:34.884172    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:28:34.884392    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 04:28:36.977754    8304 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 04:28:36.977842    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:36.977933    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 04:28:38.714552    8304 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 04:28:38.714552    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:38.715133    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:28:40.201910    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:28:40.201910    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:40.201910    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:28:43.823071    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:28:43.823301    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:43.825803    8304 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:28:44.300316    8304 main.go:141] libmachine: Creating SSH key...
	I0719 04:28:44.495605    8304 main.go:141] libmachine: Creating VM...
	I0719 04:28:44.495605    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:28:47.306341    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:28:47.306532    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:47.306532    8304 main.go:141] libmachine: Using switch "Default Switch"
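	The switch selection above parses the JSON that `ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|...)` emits, preferring an external switch and falling back to the well-known "Default Switch" GUID seen in the filter. A minimal Go sketch of that parse (struct fields taken from the logged output; `pickSwitch` is an illustrative helper, not minikube's actual function):

	```go
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// vmSwitch mirrors the fields selected by the PowerShell query:
	// Select Id, Name, SwitchType (Private=0, Internal=1, External=2).
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	// pickSwitch returns the first external switch, otherwise the
	// Hyper-V "Default Switch" matched by its well-known GUID.
	func pickSwitch(raw []byte) (string, error) {
		var switches []vmSwitch
		if err := json.Unmarshal(raw, &switches); err != nil {
			return "", err
		}
		const defaultSwitchID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"
		for _, s := range switches {
			if s.SwitchType == 2 || s.Id == defaultSwitchID {
				return s.Name, nil
			}
		}
		return "", fmt.Errorf("no usable Hyper-V switch found")
	}

	func main() {
		// JSON captured in the log above.
		raw := []byte(`[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`)
		name, err := pickSwitch(raw)
		if err != nil {
			panic(err)
		}
		fmt.Println(name) // Default Switch
	}
	```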
	I0719 04:28:47.306532    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:28:49.080663    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:28:49.080663    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:49.080663    8304 main.go:141] libmachine: Creating VHD
	I0719 04:28:49.080663    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 04:28:52.898187    8304 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4A0B9D16-61CF-42E2-A324-34273632452A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 04:28:52.898187    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:52.898187    8304 main.go:141] libmachine: Writing magic tar header
	I0719 04:28:52.898187    8304 main.go:141] libmachine: Writing SSH key tar header
	I0719 04:28:52.907540    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 04:28:56.142664    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:28:56.142851    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:56.143009    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\disk.vhd' -SizeBytes 20000MB
	I0719 04:28:58.749878    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:28:58.749878    8304 main.go:141] libmachine: [stderr =====>] : 
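	The "Writing magic tar header" / "Writing SSH key tar header" steps above embed the generated SSH key as a tar stream at the front of the small fixed VHD, so the guest can extract it on first boot, before the disk is converted to dynamic and resized. A sketch of producing such a tar payload with Go's archive/tar (the entry name is an assumption for illustration; the real driver writes directly into the .vhd):

	```go
	package main

	import (
		"archive/tar"
		"bytes"
		"fmt"
	)

	// writeKeyTar builds a tar archive holding a single SSH key entry,
	// the shape of payload prepended to the raw disk image.
	func writeKeyTar(key []byte) ([]byte, error) {
		var buf bytes.Buffer
		tw := tar.NewWriter(&buf)
		hdr := &tar.Header{
			Name: ".ssh/id_rsa", // assumed in-archive path
			Mode: 0600,
			Size: int64(len(key)),
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(key); err != nil {
			return nil, err
		}
		if err := tw.Close(); err != nil {
			return nil, err
		}
		return buf.Bytes(), nil
	}

	func main() {
		out, err := writeKeyTar([]byte("-----BEGIN RSA PRIVATE KEY-----\n..."))
		if err != nil {
			panic(err)
		}
		fmt.Println(len(out)) // tar output is padded to 512-byte blocks
	}
	```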
	I0719 04:28:58.750611    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-062500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0719 04:29:03.005022    8304 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-062500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 04:29:03.005495    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:03.005495    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-062500 -DynamicMemoryEnabled $false
	I0719 04:29:05.298546    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:05.298546    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:05.298759    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-062500 -Count 2
	I0719 04:29:07.506986    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:07.508017    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:07.508017    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-062500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\boot2docker.iso'
	I0719 04:29:10.131526    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:10.131526    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:10.132092    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-062500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\disk.vhd'
	I0719 04:29:12.818947    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:12.818947    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:12.819719    8304 main.go:141] libmachine: Starting VM...
	I0719 04:29:12.819719    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-062500
	I0719 04:29:15.994865    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:15.994865    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:15.994865    8304 main.go:141] libmachine: Waiting for host to start...
	I0719 04:29:15.994865    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:18.358194    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:18.358194    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:18.358194    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:20.954588    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:20.955410    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:21.965774    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:24.220460    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:24.220460    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:24.221499    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:26.756336    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:26.756336    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:27.761067    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:30.008757    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:30.008757    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:30.008757    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:32.597755    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:32.597755    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:33.602921    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:35.903497    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:35.903992    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:35.904114    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:38.494191    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:38.494271    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:39.509806    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:41.759742    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:41.760025    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:41.760156    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:44.323925    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:29:44.323925    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:44.324485    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:46.471343    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:46.472359    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:46.472442    8304 machine.go:94] provisionDockerMachine start ...
	I0719 04:29:46.472598    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:48.697995    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:48.697995    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:48.697995    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:51.272276    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:29:51.272712    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:51.278246    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:29:51.290034    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:29:51.290304    8304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:29:51.420616    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 04:29:51.420616    8304 buildroot.go:166] provisioning hostname "ha-062500"
	I0719 04:29:51.420616    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:53.563457    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:53.563457    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:53.564563    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:56.108767    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:29:56.108767    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:56.113915    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:29:56.114198    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:29:56.114198    8304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-062500 && echo "ha-062500" | sudo tee /etc/hostname
	I0719 04:29:56.267770    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-062500
	
	I0719 04:29:56.267770    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:58.461961    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:58.462886    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:58.462985    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:00.997514    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:00.998432    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:01.003486    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:01.004274    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:01.004274    8304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-062500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-062500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-062500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:30:01.139934    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
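	The shell snippet above keeps /etc/hosts consistent with the new hostname: skip if an entry already ends with the name, otherwise rewrite an existing 127.0.1.1 line or append one. The same logic as a Go sketch (an illustrative helper, not minikube's code):

	```go
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// setHostsEntry mirrors the logged shell: no-op if a line already maps
	// the name, else rewrite the "127.0.1.1 ..." line, else append one.
	func setHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
			return hosts
		}
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loop.MatchString(hosts) {
			return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
		}
		if hosts != "" && !strings.HasSuffix(hosts, "\n") {
			hosts += "\n"
		}
		return hosts + "127.0.1.1 " + name + "\n"
	}

	func main() {
		fmt.Print(setHostsEntry("127.0.0.1 localhost\n127.0.1.1 minikube\n", "ha-062500"))
	}
	```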
	I0719 04:30:01.139934    8304 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:30:01.139934    8304 buildroot.go:174] setting up certificates
	I0719 04:30:01.139934    8304 provision.go:84] configureAuth start
	I0719 04:30:01.139934    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:03.372137    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:03.372411    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:03.372411    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:05.959595    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:05.960469    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:05.960469    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:08.193797    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:08.193797    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:08.194157    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:10.740701    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:10.740930    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:10.741015    8304 provision.go:143] copyHostCerts
	I0719 04:30:10.741157    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 04:30:10.741157    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:30:10.741157    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:30:10.741961    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:30:10.743403    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 04:30:10.743747    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:30:10.743852    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:30:10.744275    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:30:10.745568    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 04:30:10.745797    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:30:10.746033    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:30:10.746180    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:30:10.747438    8304 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-062500 san=[127.0.0.1 172.28.168.223 ha-062500 localhost minikube]
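	The server cert generated above carries SANs for the VM IP plus the hostname and loopback names listed in the log. A sketch of that generation with Go's crypto/x509 (self-signed here for brevity; the real flow signs with the CA key at ca-key.pem, and the helper name is illustrative):

	```go
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// makeServerCert issues a cert with the SAN set from the log:
	// 127.0.0.1 172.28.168.223 ha-062500 localhost minikube
	func makeServerCert() (*x509.Certificate, error) {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-062500"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.28.168.223")},
			DNSNames:     []string{"ha-062500", "localhost", "minikube"},
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			return nil, err
		}
		return x509.ParseCertificate(der)
	}

	func main() {
		cert, err := makeServerCert()
		if err != nil {
			panic(err)
		}
		fmt.Println(cert.DNSNames) // [ha-062500 localhost minikube]
	}
	```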
	I0719 04:30:11.020383    8304 provision.go:177] copyRemoteCerts
	I0719 04:30:11.031333    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:30:11.031333    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:13.313148    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:13.313418    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:13.313418    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:15.840742    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:15.840742    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:15.841965    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:30:15.938841    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9074512s)
	I0719 04:30:15.939058    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 04:30:15.939090    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:30:15.984654    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 04:30:15.985129    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0719 04:30:16.029208    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 04:30:16.029764    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:30:16.077116    8304 provision.go:87] duration metric: took 14.9370097s to configureAuth
	I0719 04:30:16.077266    8304 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:30:16.077586    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:30:16.077586    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:18.238006    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:18.238006    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:18.238130    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:20.903519    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:20.903519    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:20.910129    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:20.910357    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:20.910913    8304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:30:21.029131    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:30:21.029131    8304 buildroot.go:70] root file system type: tmpfs
	I0719 04:30:21.029244    8304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:30:21.029244    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:23.195122    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:23.195497    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:23.195545    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:25.764832    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:25.764832    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:25.774658    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:25.774658    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:25.774658    8304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:30:25.925383    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 04:30:25.925383    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:28.120342    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:28.120342    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:28.120600    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:30.741255    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:30.741362    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:30.746354    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:30.747085    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:30.747085    8304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:30:32.998547    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 04:30:32.998547    8304 machine.go:97] duration metric: took 46.52557s to provisionDockerMachine
	I0719 04:30:32.998547    8304 client.go:171] duration metric: took 1m58.1144198s to LocalClient.Create
	I0719 04:30:32.998547    8304 start.go:167] duration metric: took 1m58.1144198s to libmachine.API.Create "ha-062500"
	I0719 04:30:32.998547    8304 start.go:293] postStartSetup for "ha-062500" (driver="hyperv")
	I0719 04:30:32.998547    8304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:30:33.010285    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:30:33.010803    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:35.269165    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:35.269165    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:35.269165    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:37.927523    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:37.927523    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:37.928712    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:30:38.031913    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0208885s)
	I0719 04:30:38.046729    8304 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:30:38.054279    8304 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:30:38.054426    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:30:38.054846    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:30:38.055840    8304 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:30:38.055913    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 04:30:38.068301    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:30:38.089278    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:30:38.134296    8304 start.go:296] duration metric: took 5.1356894s for postStartSetup
	I0719 04:30:38.138222    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:40.288034    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:40.288034    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:40.288034    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:42.846175    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:42.846384    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:42.846568    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:30:42.850450    8304 start.go:128] duration metric: took 2m7.9691604s to createHost
	I0719 04:30:42.850547    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:44.989708    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:44.989708    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:44.990732    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:47.556391    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:47.556616    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:47.561657    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:47.562461    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:47.562461    8304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 04:30:47.681705    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363447.695597945
	
	I0719 04:30:47.681781    8304 fix.go:216] guest clock: 1721363447.695597945
	I0719 04:30:47.681781    8304 fix.go:229] Guest: 2024-07-19 04:30:47.695597945 +0000 UTC Remote: 2024-07-19 04:30:42.8505478 +0000 UTC m=+133.711257901 (delta=4.845050145s)
	I0719 04:30:47.681857    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:49.821604    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:49.821718    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:49.821859    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:52.363260    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:52.363260    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:52.370009    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:52.370490    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:52.370490    8304 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721363447
	I0719 04:30:52.520344    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:30:47 UTC 2024
	
	I0719 04:30:52.520344    8304 fix.go:236] clock set: Fri Jul 19 04:30:47 UTC 2024
	 (err=<nil>)
	I0719 04:30:52.520344    8304 start.go:83] releasing machines lock for "ha-062500", held for 2m17.6399464s
	I0719 04:30:52.520344    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:54.698281    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:54.698281    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:54.699018    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:57.305432    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:57.305490    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:57.309272    8304 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:30:57.309338    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:57.319983    8304 ssh_runner.go:195] Run: cat /version.json
	I0719 04:30:57.319983    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:31:02.286703    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:31:02.286703    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:02.286703    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:31:02.311907    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:31:02.311954    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:02.311954    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:31:02.374052    8304 ssh_runner.go:235] Completed: cat /version.json: (5.0538281s)
	I0719 04:31:02.386353    8304 ssh_runner.go:195] Run: systemctl --version
	I0719 04:31:02.391432    8304 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0820357s)
	W0719 04:31:02.391552    8304 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:31:02.421338    8304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:31:02.432230    8304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:31:02.445310    8304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:31:02.474831    8304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:31:02.474831    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:31:02.475116    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:31:02.522321    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0719 04:31:02.533432    8304 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:31:02.533432    8304 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:31:02.564483    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 04:31:02.586487    8304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:31:02.598483    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:31:02.629483    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:31:02.660484    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:31:02.691686    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:31:02.726654    8304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:31:02.762769    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:31:02.795795    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:31:02.827597    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 04:31:02.858771    8304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:31:02.899899    8304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:31:02.928379    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:03.160012    8304 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 04:31:03.194100    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:31:03.206635    8304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:31:03.242425    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:31:03.271415    8304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:31:03.324023    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:31:03.371584    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:31:03.417423    8304 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 04:31:03.478913    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:31:03.502429    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:31:03.550690    8304 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:31:03.568953    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:31:03.586981    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:31:03.630318    8304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:31:03.824136    8304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:31:04.019515    8304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:31:04.019823    8304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:31:04.066066    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:04.264017    8304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:31:06.859168    8304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5951209s)
	I0719 04:31:06.870601    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 04:31:06.908406    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:31:06.947108    8304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 04:31:07.157629    8304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 04:31:07.365154    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:07.566378    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 04:31:07.610799    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:31:07.645289    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:07.845587    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 04:31:07.955569    8304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 04:31:07.968546    8304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 04:31:07.977901    8304 start.go:563] Will wait 60s for crictl version
	I0719 04:31:07.989861    8304 ssh_runner.go:195] Run: which crictl
	I0719 04:31:08.005763    8304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:31:08.068961    8304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 04:31:08.079305    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:31:08.123698    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:31:08.158974    8304 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 04:31:08.159169    8304 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 04:31:08.162781    8304 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 04:31:08.162781    8304 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 04:31:08.162781    8304 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 04:31:08.162781    8304 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 04:31:08.165817    8304 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 04:31:08.165817    8304 ip.go:210] interface addr: 172.28.160.1/20
	I0719 04:31:08.176818    8304 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 04:31:08.183687    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:31:08.218421    8304 kubeadm.go:883] updating cluster {Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3
ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:31:08.218421    8304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:31:08.226923    8304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 04:31:08.251264    8304 docker.go:685] Got preloaded images: 
	I0719 04:31:08.251264    8304 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0719 04:31:08.262367    8304 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 04:31:08.293963    8304 ssh_runner.go:195] Run: which lz4
	I0719 04:31:08.300137    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 04:31:08.321156    8304 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 04:31:08.328114    8304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 04:31:08.329163    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0719 04:31:10.104144    8304 docker.go:649] duration metric: took 1.7934342s to copy over tarball
	I0719 04:31:10.120630    8304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 04:31:18.664422    8304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5436942s)
	I0719 04:31:18.664545    8304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 04:31:18.730961    8304 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 04:31:18.748676    8304 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0719 04:31:18.793529    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:19.002759    8304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:31:22.386999    8304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3840392s)
	I0719 04:31:22.397417    8304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 04:31:22.432244    8304 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 04:31:22.432244    8304 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:31:22.432244    8304 kubeadm.go:934] updating node { 172.28.168.223 8443 v1.30.3 docker true true} ...
	I0719 04:31:22.432244    8304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-062500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.168.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:31:22.443547    8304 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 04:31:22.480066    8304 cni.go:84] Creating CNI manager for ""
	I0719 04:31:22.480066    8304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 04:31:22.480066    8304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:31:22.480190    8304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.168.223 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-062500 NodeName:ha-062500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.168.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.168.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:31:22.480387    8304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.168.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-062500"
	  kubeletExtraArgs:
	    node-ip: 172.28.168.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.168.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 04:31:22.480498    8304 kube-vip.go:115] generating kube-vip config ...
	I0719 04:31:22.491383    8304 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:31:22.526409    8304 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:31:22.527283    8304 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:31:22.539261    8304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:31:22.555525    8304 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:31:22.569006    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 04:31:22.587976    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0719 04:31:22.617995    8304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:31:22.649751    8304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 04:31:22.679804    8304 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0719 04:31:22.725152    8304 ssh_runner.go:195] Run: grep 172.28.175.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:31:22.731957    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
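The one-liner above is minikube's idempotent /etc/hosts update: strip any stale `control-plane.minikube.internal` entry, then append the current mapping. A standalone sketch of the same pattern, using a temp file in place of /etc/hosts (paths and IPs here are illustrative):

```shell
# Seed a demo hosts file with a stale control-plane entry.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$HOSTS"

# Drop any prior entry for the name, then append the current mapping;
# writing to a temp file and moving it keeps the update atomic.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$HOSTS"; \
  printf '172.28.175.254\tcontrol-plane.minikube.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
cat "$HOSTS"
```

Because the old entry is filtered out before the append, re-running the update never duplicates the line.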
	I0719 04:31:22.765887    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:22.958363    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:31:22.988918    8304 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500 for IP: 172.28.168.223
	I0719 04:31:22.988918    8304 certs.go:194] generating shared ca certs ...
	I0719 04:31:22.988918    8304 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.005617    8304 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 04:31:23.022699    8304 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 04:31:23.023018    8304 certs.go:256] generating profile certs ...
	I0719 04:31:23.023283    8304 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key
	I0719 04:31:23.023817    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.crt with IP's: []
	I0719 04:31:23.133804    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.crt ...
	I0719 04:31:23.133804    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.crt: {Name:mk2fd3422fb14cd0850d18aa8c21329d8e241619 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.135311    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key ...
	I0719 04:31:23.135311    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key: {Name:mkc7c7bf529d2be753ba98c145eb5a351142671b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.136427    8304 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.7e2b9445
	I0719 04:31:23.137063    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.7e2b9445 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.168.223 172.28.175.254]
	I0719 04:31:23.399307    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.7e2b9445 ...
	I0719 04:31:23.399307    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.7e2b9445: {Name:mk95e206c16a42dba6cfde7871ec451eb4f8d55b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.400093    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.7e2b9445 ...
	I0719 04:31:23.401170    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.7e2b9445: {Name:mk2e2946de104f605963884e075902631228a152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.401398    8304 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.7e2b9445 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt
	I0719 04:31:23.413523    8304 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.7e2b9445 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key
	I0719 04:31:23.415651    8304 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key
	I0719 04:31:23.415651    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt with IP's: []
	I0719 04:31:23.655329    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt ...
	I0719 04:31:23.656278    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt: {Name:mkcde0934063d4bb3c8946462ef361cd0b8a0a56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.656660    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key ...
	I0719 04:31:23.656660    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key: {Name:mke80e57cf095435127648d70814bc8a36740f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.657895    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:31:23.658896    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:31:23.659073    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:31:23.659216    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:31:23.659406    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:31:23.659572    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:31:23.659730    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:31:23.668911    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:31:23.669963    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 04:31:23.678074    8304 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 04:31:23.678161    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 04:31:23.678161    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 04:31:23.678935    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 04:31:23.679235    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 04:31:23.679945    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 04:31:23.680191    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 04:31:23.680473    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:31:23.680473    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 04:31:23.681988    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:31:23.734526    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:31:23.777427    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:31:23.827330    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:31:23.869955    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 04:31:23.917444    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:31:23.962458    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:31:24.010772    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:31:24.062116    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 04:31:24.109468    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:31:24.158262    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 04:31:24.205554    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:31:24.252567    8304 ssh_runner.go:195] Run: openssl version
	I0719 04:31:24.273960    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:31:24.303988    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:31:24.311002    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:31:24.321015    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:31:24.342547    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:31:24.373910    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 04:31:24.403746    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 04:31:24.410732    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 04:31:24.421945    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 04:31:24.443979    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
	I0719 04:31:24.475015    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 04:31:24.505745    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 04:31:24.512338    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 04:31:24.525325    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 04:31:24.544436    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
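The `ln -fs ... /etc/ssl/certs/<hash>.0` commands above build OpenSSL's hashed trust-store lookup: OpenSSL finds a CA by the subject-name hash of its certificate, so each PEM needs a `<hash>.0` symlink. A minimal reproduction with a throwaway CA in /tmp (names are illustrative):

```shell
cd /tmp
# Create a self-signed demo CA (stand-in for minikubeCA.pem).
openssl req -x509 -newkey rsa:2048 -nodes -keyout demoCA.key -out demoCA.pem \
  -subj "/CN=demoCA" -days 1 2>/dev/null

# Compute the subject-name hash OpenSSL uses for directory lookup
# (the same value the log obtains via `openssl x509 -hash -noout`).
HASH=$(openssl x509 -hash -noout -in demoCA.pem)

# Link the cert under "<hash>.0", as done for b5213941.0 etc. above.
ln -fs demoCA.pem "$HASH.0"
readlink "$HASH.0"
```

The `.0` suffix disambiguates distinct CAs whose subject names happen to hash to the same value (`.1`, `.2`, ... would follow).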
	I0719 04:31:24.579894    8304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:31:24.587452    8304 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:31:24.587932    8304 kubeadm.go:392] StartCluster: {Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:31:24.599080    8304 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 04:31:24.631855    8304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 04:31:24.658310    8304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 04:31:24.693636    8304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 04:31:24.710350    8304 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 04:31:24.710350    8304 kubeadm.go:157] found existing configuration files:
	
	I0719 04:31:24.721446    8304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 04:31:24.737559    8304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 04:31:24.749042    8304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 04:31:24.779855    8304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 04:31:24.797933    8304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 04:31:24.810533    8304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 04:31:24.840759    8304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 04:31:24.858678    8304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 04:31:24.872995    8304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 04:31:24.904317    8304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 04:31:24.927117    8304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 04:31:24.946611    8304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 04:31:24.970150    8304 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 04:31:25.492108    8304 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 04:31:41.407301    8304 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 04:31:41.407431    8304 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 04:31:41.407643    8304 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 04:31:41.407956    8304 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 04:31:41.408310    8304 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 04:31:41.408456    8304 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 04:31:41.411239    8304 out.go:204]   - Generating certificates and keys ...
	I0719 04:31:41.411571    8304 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 04:31:41.411702    8304 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 04:31:41.411702    8304 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 04:31:41.411702    8304 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 04:31:41.412249    8304 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 04:31:41.412406    8304 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 04:31:41.412554    8304 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 04:31:41.412657    8304 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-062500 localhost] and IPs [172.28.168.223 127.0.0.1 ::1]
	I0719 04:31:41.412657    8304 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 04:31:41.413246    8304 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-062500 localhost] and IPs [172.28.168.223 127.0.0.1 ::1]
	I0719 04:31:41.413577    8304 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 04:31:41.413777    8304 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 04:31:41.413962    8304 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 04:31:41.414087    8304 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 04:31:41.414087    8304 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 04:31:41.414087    8304 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 04:31:41.414087    8304 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 04:31:41.414630    8304 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 04:31:41.414929    8304 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 04:31:41.415094    8304 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 04:31:41.415094    8304 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 04:31:41.418086    8304 out.go:204]   - Booting up control plane ...
	I0719 04:31:41.418231    8304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 04:31:41.418231    8304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 04:31:41.418231    8304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 04:31:41.418918    8304 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 04:31:41.418918    8304 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 04:31:41.418918    8304 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 04:31:41.419497    8304 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 04:31:41.419755    8304 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 04:31:41.419755    8304 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002479396s
	I0719 04:31:41.419755    8304 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 04:31:41.419755    8304 kubeadm.go:310] [api-check] The API server is healthy after 8.982301087s
	I0719 04:31:41.420464    8304 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 04:31:41.420573    8304 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 04:31:41.420573    8304 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 04:31:41.421245    8304 kubeadm.go:310] [mark-control-plane] Marking the node ha-062500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 04:31:41.421459    8304 kubeadm.go:310] [bootstrap-token] Using token: obov36.teetl7w3d3ffgrf9
	I0719 04:31:41.428385    8304 out.go:204]   - Configuring RBAC rules ...
	I0719 04:31:41.428385    8304 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 04:31:41.428385    8304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 04:31:41.429172    8304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 04:31:41.429565    8304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 04:31:41.429808    8304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 04:31:41.430048    8304 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 04:31:41.430048    8304 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 04:31:41.430048    8304 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 04:31:41.430048    8304 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 04:31:41.430627    8304 kubeadm.go:310] 
	I0719 04:31:41.430959    8304 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 04:31:41.430990    8304 kubeadm.go:310] 
	I0719 04:31:41.431170    8304 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 04:31:41.431251    8304 kubeadm.go:310] 
	I0719 04:31:41.431351    8304 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 04:31:41.431439    8304 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 04:31:41.431513    8304 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 04:31:41.431513    8304 kubeadm.go:310] 
	I0719 04:31:41.431513    8304 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 04:31:41.431513    8304 kubeadm.go:310] 
	I0719 04:31:41.431513    8304 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 04:31:41.431513    8304 kubeadm.go:310] 
	I0719 04:31:41.431513    8304 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 04:31:41.431513    8304 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 04:31:41.432242    8304 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 04:31:41.432273    8304 kubeadm.go:310] 
	I0719 04:31:41.432446    8304 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 04:31:41.432605    8304 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 04:31:41.432605    8304 kubeadm.go:310] 
	I0719 04:31:41.432795    8304 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token obov36.teetl7w3d3ffgrf9 \
	I0719 04:31:41.433032    8304 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 \
	I0719 04:31:41.433032    8304 kubeadm.go:310] 	--control-plane 
	I0719 04:31:41.433032    8304 kubeadm.go:310] 
	I0719 04:31:41.433388    8304 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 04:31:41.433388    8304 kubeadm.go:310] 
	I0719 04:31:41.433569    8304 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token obov36.teetl7w3d3ffgrf9 \
	I0719 04:31:41.433569    8304 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 
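The `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is, per the kubeadm documentation, the SHA-256 digest of the DER-encoded public key of the cluster CA. A sketch that derives such a value from a throwaway CA (standing in for the real /var/lib/minikube/certs/ca.crt):

```shell
# Demo CA in place of the cluster's real CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/join-ca.key \
  -out /tmp/join-ca.crt -subj "/CN=minikubeCA" -days 1 2>/dev/null

# Extract the public key, convert it to DER, hash it -- the recipe
# kubeadm's docs give for recomputing the discovery hash on a node.
openssl x509 -pubkey -noout -in /tmp/join-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print "sha256:" $NF}'
```

A joining node recomputes this hash from the CA it fetches via the bootstrap token and refuses to join if it differs, which is what pins the join against a man-in-the-middle API server.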
	I0719 04:31:41.433956    8304 cni.go:84] Creating CNI manager for ""
	I0719 04:31:41.433999    8304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 04:31:41.444441    8304 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 04:31:41.458795    8304 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 04:31:41.466212    8304 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 04:31:41.466212    8304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 04:31:41.510042    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 04:31:42.096873    8304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 04:31:42.110585    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:42.112556    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-062500 minikube.k8s.io/updated_at=2024_07_19T04_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-062500 minikube.k8s.io/primary=true
	I0719 04:31:42.122560    8304 ops.go:34] apiserver oom_adj: -16
	I0719 04:31:42.376213    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:42.888962    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:43.388066    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:43.876340    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:44.383641    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:44.885698    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:45.384539    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:45.886261    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:46.388409    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:46.889939    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:47.389444    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:47.889000    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:48.390560    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:48.876684    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:49.377749    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:49.885613    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:50.386581    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:50.886094    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:51.390496    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:51.879609    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:52.377332    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:52.880696    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:53.387270    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:53.880228    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:54.052466    8304 kubeadm.go:1113] duration metric: took 11.9554556s to wait for elevateKubeSystemPrivileges
	I0719 04:31:54.053543    8304 kubeadm.go:394] duration metric: took 29.4652724s to StartCluster
	I0719 04:31:54.053543    8304 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:54.053543    8304 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:31:54.055056    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:54.056426    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 04:31:54.056426    8304 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:31:54.056426    8304 start.go:241] waiting for startup goroutines ...
	I0719 04:31:54.056426    8304 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 04:31:54.056426    8304 addons.go:69] Setting default-storageclass=true in profile "ha-062500"
	I0719 04:31:54.056426    8304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-062500"
	I0719 04:31:54.056426    8304 addons.go:69] Setting storage-provisioner=true in profile "ha-062500"
	I0719 04:31:54.056957    8304 addons.go:234] Setting addon storage-provisioner=true in "ha-062500"
	I0719 04:31:54.057199    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:31:54.057274    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:31:54.058857    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:54.059645    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:54.281265    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 04:31:54.630390    8304 start.go:971] {"host.minikube.internal": 172.28.160.1} host record injected into CoreDNS's ConfigMap
	I0719 04:31:56.592369    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:31:56.593278    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:56.594077    8304 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:31:56.594710    8304 kapi.go:59] client config for ha-062500: &rest.Config{Host:"https://172.28.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 04:31:56.596607    8304 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 04:31:56.597001    8304 addons.go:234] Setting addon default-storageclass=true in "ha-062500"
	I0719 04:31:56.597212    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:31:56.597891    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:56.637973    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:31:56.638352    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:56.641754    8304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 04:31:56.645618    8304 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:31:56.645618    8304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 04:31:56.645696    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:58.946591    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:31:58.946591    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:58.946591    8304 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 04:31:58.946591    8304 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 04:31:58.946591    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:59.010398    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:31:59.010578    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:59.010675    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:32:01.339884    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:32:01.340131    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:01.340131    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:32:01.817521    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:32:01.817699    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:01.818392    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:32:01.980812    8304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:32:04.034807    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:32:04.034807    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:04.034807    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:32:04.170047    8304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 04:32:04.326503    8304 round_trippers.go:463] GET https://172.28.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 04:32:04.326549    8304 round_trippers.go:469] Request Headers:
	I0719 04:32:04.326594    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:32:04.326594    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:32:04.339165    8304 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 04:32:04.340467    8304 round_trippers.go:463] PUT https://172.28.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 04:32:04.340549    8304 round_trippers.go:469] Request Headers:
	I0719 04:32:04.340549    8304 round_trippers.go:473]     Content-Type: application/json
	I0719 04:32:04.340549    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:32:04.340549    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:32:04.350978    8304 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 04:32:04.355917    8304 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 04:32:04.360570    8304 addons.go:510] duration metric: took 10.3040256s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 04:32:04.360570    8304 start.go:246] waiting for cluster config update ...
	I0719 04:32:04.360570    8304 start.go:255] writing updated cluster config ...
	I0719 04:32:04.366652    8304 out.go:177] 
	I0719 04:32:04.375410    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:32:04.375410    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:32:04.382126    8304 out.go:177] * Starting "ha-062500-m02" control-plane node in "ha-062500" cluster
	I0719 04:32:04.385631    8304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:32:04.385631    8304 cache.go:56] Caching tarball of preloaded images
	I0719 04:32:04.385631    8304 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:32:04.386531    8304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:32:04.386531    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:32:04.389466    8304 start.go:360] acquireMachinesLock for ha-062500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:32:04.390558    8304 start.go:364] duration metric: took 1.092ms to acquireMachinesLock for "ha-062500-m02"
	I0719 04:32:04.390696    8304 start.go:93] Provisioning new machine with config: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:32:04.390696    8304 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0719 04:32:04.395665    8304 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:32:04.396663    8304 start.go:159] libmachine.API.Create for "ha-062500" (driver="hyperv")
	I0719 04:32:04.396663    8304 client.go:168] LocalClient.Create starting
	I0719 04:32:04.397202    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 04:32:04.397499    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:32:04.397499    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:32:04.397617    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 04:32:04.397617    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:32:04.397617    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:32:04.397617    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 04:32:06.298953    8304 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 04:32:06.300094    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:06.300094    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 04:32:08.095139    8304 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 04:32:08.095139    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:08.095139    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:32:09.625371    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:32:09.625371    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:09.625460    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:32:13.347945    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:32:13.347945    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:13.352053    8304 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:32:13.777804    8304 main.go:141] libmachine: Creating SSH key...
	I0719 04:32:14.037694    8304 main.go:141] libmachine: Creating VM...
	I0719 04:32:14.037830    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:32:17.000998    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:32:17.000998    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:17.001084    8304 main.go:141] libmachine: Using switch "Default Switch"
	I0719 04:32:17.001163    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:32:18.764615    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:32:18.764615    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:18.764615    8304 main.go:141] libmachine: Creating VHD
	I0719 04:32:18.765025    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 04:32:22.666040    8304 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 96805E3E-94FF-481A-B83B-4C9A63A1B868
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 04:32:22.666040    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:22.666040    8304 main.go:141] libmachine: Writing magic tar header
	I0719 04:32:22.666040    8304 main.go:141] libmachine: Writing SSH key tar header
	I0719 04:32:22.675906    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 04:32:25.945431    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:25.945575    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:25.945575    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\disk.vhd' -SizeBytes 20000MB
	I0719 04:32:28.523659    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:28.523659    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:28.523848    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-062500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0719 04:32:32.202348    8304 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-062500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 04:32:32.202348    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:32.202429    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-062500-m02 -DynamicMemoryEnabled $false
	I0719 04:32:34.507336    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:34.508072    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:34.508229    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-062500-m02 -Count 2
	I0719 04:32:36.738293    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:36.738293    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:36.739090    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-062500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\boot2docker.iso'
	I0719 04:32:39.368162    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:39.368724    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:39.368724    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-062500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\disk.vhd'
	I0719 04:32:42.055680    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:42.055748    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:42.055748    8304 main.go:141] libmachine: Starting VM...
	I0719 04:32:42.055748    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-062500-m02
	I0719 04:32:45.177105    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:45.177105    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:45.177105    8304 main.go:141] libmachine: Waiting for host to start...
	I0719 04:32:45.177105    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:32:47.553788    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:32:47.554480    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:47.554480    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:32:50.127086    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:50.127086    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:51.128745    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:32:53.386161    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:32:53.386610    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:53.386610    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:32:55.953528    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:55.953528    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:56.953782    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:32:59.267522    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:32:59.267522    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:59.268144    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:01.867195    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:33:01.867598    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:02.875690    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:05.143930    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:05.143982    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:05.143982    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:07.759834    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:33:07.759834    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:08.761012    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:11.105424    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:11.105424    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:11.105424    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:13.774101    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:13.774101    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:13.774359    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:15.966951    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:15.968130    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:15.968130    8304 machine.go:94] provisionDockerMachine start ...
	I0719 04:33:15.968338    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:18.245918    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:18.246089    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:18.246177    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:20.863605    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:20.864414    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:20.869963    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:20.880042    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:20.881052    8304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:33:21.013217    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 04:33:21.013217    8304 buildroot.go:166] provisioning hostname "ha-062500-m02"
	I0719 04:33:21.013453    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:23.271386    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:23.271386    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:23.271386    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:25.882495    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:25.882495    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:25.887874    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:25.887913    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:25.887913    8304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-062500-m02 && echo "ha-062500-m02" | sudo tee /etc/hostname
	I0719 04:33:26.064397    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-062500-m02
	
	I0719 04:33:26.064513    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:28.330951    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:28.331953    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:28.332210    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:30.937238    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:30.937238    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:30.943341    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:30.944113    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:30.944113    8304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-062500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-062500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-062500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:33:31.098166    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:33:31.098166    8304 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:33:31.098276    8304 buildroot.go:174] setting up certificates
	I0719 04:33:31.098276    8304 provision.go:84] configureAuth start
	I0719 04:33:31.098381    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:33.328370    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:33.328370    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:33.328370    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:35.920852    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:35.920852    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:35.921035    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:38.141107    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:38.141377    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:38.141377    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:40.743142    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:40.743825    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:40.743825    8304 provision.go:143] copyHostCerts
	I0719 04:33:40.744024    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 04:33:40.744580    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:33:40.744580    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:33:40.745165    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:33:40.746335    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 04:33:40.746555    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:33:40.746672    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:33:40.747036    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:33:40.748162    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 04:33:40.748350    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:33:40.748350    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:33:40.748350    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:33:40.749893    8304 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-062500-m02 san=[127.0.0.1 172.28.171.55 ha-062500-m02 localhost minikube]
	I0719 04:33:40.832973    8304 provision.go:177] copyRemoteCerts
	I0719 04:33:40.843026    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:33:40.843026    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:43.006903    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:43.007430    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:43.007430    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:45.586850    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:45.586850    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:45.588009    8304 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:33:45.697056    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8539736s)
	I0719 04:33:45.697106    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 04:33:45.697636    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:33:45.744118    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 04:33:45.744613    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:33:45.790380    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 04:33:45.790932    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:33:45.836575    8304 provision.go:87] duration metric: took 14.7381305s to configureAuth
	I0719 04:33:45.836575    8304 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:33:45.837392    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:33:45.837392    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:47.997471    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:47.998259    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:47.998259    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:50.565843    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:50.566672    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:50.572072    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:50.572773    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:50.572773    8304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:33:50.713088    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:33:50.713162    8304 buildroot.go:70] root file system type: tmpfs
	I0719 04:33:50.713398    8304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:33:50.713467    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:52.871204    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:52.871999    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:52.872067    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:55.498921    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:55.499145    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:55.503986    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:55.504607    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:55.504607    8304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.168.223"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:33:55.662177    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.168.223
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 04:33:55.662349    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:57.829807    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:57.829807    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:57.830048    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:00.399637    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:00.400649    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:00.408146    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:34:00.409178    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:34:00.409178    8304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:34:02.637936    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 04:34:02.637936    8304 machine.go:97] duration metric: took 46.669269s to provisionDockerMachine
	I0719 04:34:02.637936    8304 client.go:171] duration metric: took 1m58.2399132s to LocalClient.Create
	I0719 04:34:02.637936    8304 start.go:167] duration metric: took 1m58.2399511s to libmachine.API.Create "ha-062500"
	I0719 04:34:02.637936    8304 start.go:293] postStartSetup for "ha-062500-m02" (driver="hyperv")
	I0719 04:34:02.637936    8304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:34:02.650439    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:34:02.650439    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:04.821072    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:04.822078    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:04.822320    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:07.475069    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:07.475069    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:07.475577    8304 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:34:07.595359    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9448635s)
	I0719 04:34:07.606784    8304 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:34:07.613198    8304 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:34:07.613198    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:34:07.613732    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:34:07.614626    8304 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:34:07.614626    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 04:34:07.626810    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:34:07.642993    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:34:07.694537    8304 start.go:296] duration metric: took 5.0565427s for postStartSetup
	I0719 04:34:07.697850    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:09.912066    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:09.912066    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:09.912747    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:12.507225    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:12.507225    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:12.507667    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:34:12.510126    8304 start.go:128] duration metric: took 2m8.1179575s to createHost
	I0719 04:34:12.510221    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:14.692063    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:14.692119    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:14.692119    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:17.297859    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:17.297859    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:17.303950    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:34:17.304843    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:34:17.304843    8304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:34:17.436003    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363657.451249714
	
	I0719 04:34:17.436195    8304 fix.go:216] guest clock: 1721363657.451249714
	I0719 04:34:17.436195    8304 fix.go:229] Guest: 2024-07-19 04:34:17.451249714 +0000 UTC Remote: 2024-07-19 04:34:12.5101269 +0000 UTC m=+343.368425901 (delta=4.941122814s)
	I0719 04:34:17.436268    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:19.636508    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:19.636555    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:19.636625    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:22.236982    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:22.238004    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:22.243426    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:34:22.243426    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:34:22.243426    8304 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721363657
	I0719 04:34:22.389651    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:34:17 UTC 2024
	
	I0719 04:34:22.389651    8304 fix.go:236] clock set: Fri Jul 19 04:34:17 UTC 2024
	 (err=<nil>)
	I0719 04:34:22.389651    8304 start.go:83] releasing machines lock for "ha-062500-m02", held for 2m17.9975065s
	I0719 04:34:22.390296    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:24.567457    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:24.567518    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:24.567642    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:27.167869    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:27.167869    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:27.172140    8304 out.go:177] * Found network options:
	I0719 04:34:27.176410    8304 out.go:177]   - NO_PROXY=172.28.168.223
	W0719 04:34:27.179041    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:34:27.181073    8304 out.go:177]   - NO_PROXY=172.28.168.223
	W0719 04:34:27.183782    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:34:27.185180    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:34:27.187159    8304 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:34:27.187547    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:27.196167    8304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 04:34:27.196167    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:29.476081    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:29.476132    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:29.476132    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:29.508118    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:29.508188    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:29.508188    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:32.259057    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:32.260203    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:32.260741    8304 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:34:32.288986    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:32.289401    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:32.289803    8304 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:34:32.359244    8304 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1720252s)
	W0719 04:34:32.359244    8304 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:34:32.392908    8304 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1965774s)
	W0719 04:34:32.392908    8304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:34:32.404240    8304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:34:32.433447    8304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:34:32.433590    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:34:32.433725    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
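The two-line command above writes /etc/crictl.yaml by piping a printf'd YAML snippet through sudo tee. A minimal sketch of the same idiom against a scratch path (no sudo; the temp directory and contents here are illustrative, not the VM's real files):

```shell
# Write a crictl.yaml the way the log does: printf piped into tee.
tmp=$(mktemp -d)
mkdir -p "$tmp/etc"
printf %s 'runtime-endpoint: unix:///run/containerd/containerd.sock
' | tee "$tmp/etc/crictl.yaml"
```

tee both writes the file and echoes it to stdout, which is why the real command needs no separate redirect under sudo.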
	I0719 04:34:32.482405    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0719 04:34:32.498052    8304 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:34:32.498052    8304 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:34:32.518266    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 04:34:32.541147    8304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:34:32.552815    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:34:32.587644    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:34:32.621594    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:34:32.651777    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:34:32.683090    8304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:34:32.714311    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:34:32.744836    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:34:32.775714    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
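The run of sed commands above rewrites /etc/containerd/config.toml in place: pinning the pause image and forcing the cgroupfs driver via `SystemdCgroup = false`, among other edits. A condensed sketch of the two central substitutions, applied to a throwaway sample file (the sample contents are illustrative, not the VM's real config):

```shell
# Build a sample config.toml with the fields the log's sed commands touch.
tmp=$(mktemp -d)
cat > "$tmp/config.toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# Pin the sandbox (pause) image, preserving indentation via the \1 backref.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$tmp/config.toml"
# Switch containerd to the cgroupfs driver.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$tmp/config.toml"

cat "$tmp/config.toml"
```

The `-r` flag enables extended regexes so `( *)` captures the leading indentation, letting the replacement keep the TOML nesting intact.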
	I0719 04:34:32.807380    8304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:34:32.835365    8304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:34:32.863030    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:33.068054    8304 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 04:34:33.102673    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:34:33.115011    8304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:34:33.158387    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:34:33.193643    8304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:34:33.242334    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:34:33.278542    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:34:33.316342    8304 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 04:34:33.378803    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:34:33.402685    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:34:33.447850    8304 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:34:33.467514    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:34:33.483586    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:34:33.524283    8304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:34:33.713836    8304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:34:33.908580    8304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:34:33.908689    8304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:34:33.953596    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:34.166124    8304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:34:36.744490    8304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5783372s)
	I0719 04:34:36.756888    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 04:34:36.796399    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:34:36.837690    8304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 04:34:37.036882    8304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 04:34:37.256772    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:37.465459    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 04:34:37.507073    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:34:37.543421    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:37.739417    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 04:34:37.851653    8304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 04:34:37.864055    8304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 04:34:37.873758    8304 start.go:563] Will wait 60s for crictl version
	I0719 04:34:37.884712    8304 ssh_runner.go:195] Run: which crictl
	I0719 04:34:37.901145    8304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:34:37.956329    8304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 04:34:37.964898    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:34:38.004668    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:34:38.042482    8304 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 04:34:38.045021    8304 out.go:177]   - env NO_PROXY=172.28.168.223
	I0719 04:34:38.047746    8304 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 04:34:38.051840    8304 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 04:34:38.051840    8304 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 04:34:38.051840    8304 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 04:34:38.051840    8304 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 04:34:38.054486    8304 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 04:34:38.054486    8304 ip.go:210] interface addr: 172.28.160.1/20
	I0719 04:34:38.065447    8304 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 04:34:38.072444    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:34:38.094111    8304 mustload.go:65] Loading cluster: ha-062500
	I0719 04:34:38.094530    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:34:38.095513    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:34:40.262701    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:40.262701    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:40.262701    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:34:40.264001    8304 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500 for IP: 172.28.171.55
	I0719 04:34:40.264001    8304 certs.go:194] generating shared ca certs ...
	I0719 04:34:40.264103    8304 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:34:40.264661    8304 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 04:34:40.265284    8304 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 04:34:40.265455    8304 certs.go:256] generating profile certs ...
	I0719 04:34:40.266198    8304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key
	I0719 04:34:40.266467    8304 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.37640fbc
	I0719 04:34:40.266690    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.37640fbc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.168.223 172.28.171.55 172.28.175.254]
	I0719 04:34:40.343987    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.37640fbc ...
	I0719 04:34:40.343987    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.37640fbc: {Name:mkd78f4da0d794c5fe5aee03af6db8c88c496c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:34:40.344928    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.37640fbc ...
	I0719 04:34:40.344928    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.37640fbc: {Name:mkf2ec88ef353924c4d5486fd8616e188139fa66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:34:40.345964    8304 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.37640fbc -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt
	I0719 04:34:40.359866    8304 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.37640fbc -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key
	I0719 04:34:40.361167    8304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key
	I0719 04:34:40.361167    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:34:40.361316    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:34:40.361316    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:34:40.361316    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:34:40.361316    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:34:40.361884    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:34:40.361884    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:34:40.361884    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:34:40.362822    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 04:34:40.363434    8304 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 04:34:40.363671    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 04:34:40.363879    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 04:34:40.364350    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 04:34:40.364649    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 04:34:40.365268    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 04:34:40.365268    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 04:34:40.365268    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:34:40.365268    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 04:34:40.365936    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:34:42.572033    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:42.572033    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:42.572033    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:45.194645    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:34:45.195202    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:45.195800    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:34:45.295317    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0719 04:34:45.303478    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 04:34:45.336347    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0719 04:34:45.343055    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 04:34:45.373015    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 04:34:45.379629    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 04:34:45.416639    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0719 04:34:45.423556    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0719 04:34:45.453544    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0719 04:34:45.460767    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 04:34:45.496227    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0719 04:34:45.503532    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0719 04:34:45.522638    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:34:45.571865    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:34:45.619630    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:34:45.665337    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:34:45.709951    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 04:34:45.760470    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:34:45.804636    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:34:45.849004    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:34:45.897086    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 04:34:45.940879    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:34:45.985517    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 04:34:46.031305    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 04:34:46.069125    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 04:34:46.100426    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 04:34:46.130850    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0719 04:34:46.162096    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 04:34:46.193989    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0719 04:34:46.225184    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 04:34:46.271254    8304 ssh_runner.go:195] Run: openssl version
	I0719 04:34:46.290344    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 04:34:46.319149    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 04:34:46.327467    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 04:34:46.340009    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 04:34:46.360967    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:34:46.391711    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:34:46.422919    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:34:46.429831    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:34:46.441753    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:34:46.460915    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:34:46.494175    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 04:34:46.526282    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 04:34:46.532052    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 04:34:46.544154    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 04:34:46.565716    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
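Each cert above is installed twice: copied under /usr/share/ca-certificates, then symlinked into /etc/ssl/certs under the name `<subject-hash>.0`, because OpenSSL looks CAs up by a hash of their subject. A sketch of that hashing idiom against a freshly generated throwaway cert (assumes the `openssl` CLI is available; all paths are scratch paths, not the VM's):

```shell
tmp=$(mktemp -d)
# Generate a throwaway self-signed CA to stand in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null

# OpenSSL locates trusted CAs by subject-name hash, so the symlink must
# be named <hash>.0 — the same naming the log's ln -fs commands produce.
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"
ls -la "$tmp/$hash.0"
```

This is also what the `c_rehash` utility automates for a whole directory; minikube does it one cert at a time so only its own CAs are touched.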
	I0719 04:34:46.618565    8304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:34:46.625848    8304 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:34:46.626396    8304 kubeadm.go:934] updating node {m02 172.28.171.55 8443 v1.30.3 docker true true} ...
	I0719 04:34:46.626748    8304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-062500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.171.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:34:46.626748    8304 kube-vip.go:115] generating kube-vip config ...
	I0719 04:34:46.640179    8304 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:34:46.672859    8304 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:34:46.672956    8304 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:34:46.685240    8304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:34:46.704965    8304 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 04:34:46.716937    8304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 04:34:46.736936    8304 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl
	I0719 04:34:46.736936    8304 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm
	I0719 04:34:46.736936    8304 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet
	I0719 04:34:47.961315    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:34:47.973342    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:34:47.981760    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 04:34:47.981760    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 04:34:49.392286    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:34:49.403963    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:34:49.414004    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 04:34:49.414479    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 04:34:51.192004    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:34:51.217758    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:34:51.228974    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:34:51.236601    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 04:34:51.236842    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 04:34:52.096677    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 04:34:52.115556    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 04:34:52.150287    8304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:34:52.184011    8304 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:34:52.239444    8304 ssh_runner.go:195] Run: grep 172.28.175.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:34:52.245595    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
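The one-liner above is minikube's hosts-file update idiom: filter out any stale line for the name, append the fresh mapping, and copy the rebuilt file back into place. The same pipeline run against a scratch hosts file (the sample entries and temp paths are illustrative):

```shell
tmp=$(mktemp -d)
hosts="$tmp/hosts"
printf '127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n' > "$hosts"

# Drop any existing entry for the name, append the current mapping,
# then copy the temp file over the original — mirroring the log's
# grep -v / echo / cp pipeline ($'\t' is a literal tab).
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  echo $'172.28.175.254\tcontrol-plane.minikube.internal'; } > "$tmp/h.$$"
cp "$tmp/h.$$" "$hosts"
cat "$hosts"
```

Writing to a temp file first and copying the whole result back avoids truncating /etc/hosts mid-edit, and `cp` (rather than `mv`) preserves the original file's ownership and mode under sudo.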
	I0719 04:34:52.279143    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:52.488994    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:34:52.519075    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:34:52.520356    8304 start.go:317] joinCluster: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:34:52.520666    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 04:34:52.520752    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:34:54.724690    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:54.724690    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:54.724844    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:57.431278    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:34:57.431278    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:57.432734    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:34:57.637984    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.117142s)
	I0719 04:34:57.637984    8304 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:34:57.638173    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token de3dy6.f83qklug2r5n0div --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-062500-m02 --control-plane --apiserver-advertise-address=172.28.171.55 --apiserver-bind-port=8443"
	I0719 04:35:43.653172    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token de3dy6.f83qklug2r5n0div --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-062500-m02 --control-plane --apiserver-advertise-address=172.28.171.55 --apiserver-bind-port=8443": (46.0144113s)
	I0719 04:35:43.653261    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 04:35:44.446397    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-062500-m02 minikube.k8s.io/updated_at=2024_07_19T04_35_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-062500 minikube.k8s.io/primary=false
	I0719 04:35:44.619269    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-062500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 04:35:44.770923    8304 start.go:319] duration metric: took 52.2499661s to joinCluster
	I0719 04:35:44.771244    8304 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:35:44.772095    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:35:44.776775    8304 out.go:177] * Verifying Kubernetes components...
	I0719 04:35:44.797475    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:35:45.168637    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:35:45.200199    8304 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:35:45.201025    8304 kapi.go:59] client config for ha-062500: &rest.Config{Host:"https://172.28.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 04:35:45.201188    8304 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.175.254:8443 with https://172.28.168.223:8443
	I0719 04:35:45.201801    8304 node_ready.go:35] waiting up to 6m0s for node "ha-062500-m02" to be "Ready" ...
	I0719 04:35:45.201801    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:45.202346    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:45.202346    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:45.202346    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:45.218315    8304 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0719 04:35:45.710071    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:45.710071    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:45.710071    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:45.710071    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:45.716745    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:35:46.214606    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:46.214898    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:46.214898    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:46.214898    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:46.219381    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:46.715582    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:46.715692    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:46.715783    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:46.715783    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:46.723072    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:35:47.207702    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:47.207702    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:47.207702    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:47.207702    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:47.213675    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:47.215151    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:47.713046    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:47.713453    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:47.713453    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:47.713512    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:47.719298    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:48.207499    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:48.207614    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:48.207614    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:48.207614    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:48.212852    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:48.717437    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:48.717641    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:48.717641    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:48.717690    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:48.722388    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:49.209427    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:49.209427    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:49.209427    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:49.209427    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:49.214319    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:49.215251    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:49.711671    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:49.711891    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:49.711891    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:49.711891    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:49.716297    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:50.204111    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:50.204363    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:50.204363    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:50.204363    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:50.213202    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:35:50.710613    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:50.710613    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:50.710613    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:50.710613    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:50.717494    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:35:51.202100    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:51.202100    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:51.202100    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:51.202100    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:51.207987    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:51.706696    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:51.706782    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:51.706782    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:51.706782    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:51.713182    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:35:51.714755    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:52.207754    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:52.207754    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:52.207754    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:52.207754    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:52.228509    8304 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0719 04:35:52.702366    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:52.702366    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:52.702366    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:52.702366    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:52.707944    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:53.207108    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:53.207203    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:53.207203    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:53.207203    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:53.215282    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:35:53.710087    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:53.710087    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:53.710191    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:53.710191    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:53.714840    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:53.716061    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:54.209916    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:54.210188    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:54.210188    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:54.210188    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:54.215300    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:54.715291    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:54.715590    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:54.715590    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:54.715590    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:54.720382    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:55.202693    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:55.202822    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:55.202822    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:55.202822    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:55.208371    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:55.704569    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:55.704702    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:55.704702    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:55.704702    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:55.710143    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:56.204096    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:56.204234    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:56.204234    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:56.204234    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:56.209775    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:56.210655    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:56.705230    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:56.705353    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:56.705353    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:56.705353    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:56.713700    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:35:57.208301    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:57.208409    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:57.208409    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:57.208409    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:57.213392    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:57.706667    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:57.706913    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:57.706913    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:57.706913    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:57.713423    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:35:58.209636    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:58.209894    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:58.209894    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:58.209894    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:58.215467    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:58.216750    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:58.702561    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:58.702561    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:58.702561    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:58.702561    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:58.708083    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:59.207508    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:59.207853    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:59.207853    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:59.207853    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:59.218632    8304 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 04:35:59.707799    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:59.707799    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:59.708088    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:59.708088    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:59.716734    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:00.205102    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:00.205102    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:00.205202    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:00.205202    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:00.210494    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:00.706041    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:00.706121    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:00.706121    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:00.706121    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:00.711370    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:00.712552    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:36:01.206744    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:01.206835    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:01.206835    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:01.206835    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:01.212690    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:01.708054    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:01.708054    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:01.708054    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:01.708194    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:01.714621    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:02.209534    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:02.209534    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:02.209534    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:02.209534    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:02.215678    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:02.714208    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:02.714208    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:02.714338    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:02.714338    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:02.719683    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:02.720970    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:36:03.216484    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:03.216484    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:03.216484    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:03.216484    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:03.223168    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:03.714863    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:03.715155    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:03.715155    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:03.715155    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:03.721067    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:04.213668    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:04.213737    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:04.213737    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:04.213737    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:04.218988    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:04.710753    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:04.710753    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:04.710753    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:04.710753    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:04.716883    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:05.210753    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:05.210753    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:05.210753    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:05.210753    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:05.216594    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:05.217536    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:36:05.702967    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:05.702967    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:05.702967    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:05.703084    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:05.709537    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:06.215815    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:06.215815    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:06.215815    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:06.215815    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:06.222651    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:06.717486    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:06.717486    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:06.717486    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:06.717486    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:06.723114    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:07.216094    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:07.216094    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:07.216094    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:07.216094    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:07.221681    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:07.222590    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:36:07.715739    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:07.715739    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:07.715739    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:07.715739    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:07.721206    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:08.202556    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:08.202556    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.202556    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.202556    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.207208    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:36:08.209218    8304 node_ready.go:49] node "ha-062500-m02" has status "Ready":"True"
	I0719 04:36:08.209365    8304 node_ready.go:38] duration metric: took 23.0071527s for node "ha-062500-m02" to be "Ready" ...
	I0719 04:36:08.209365    8304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:36:08.209601    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:08.209601    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.209601    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.209691    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.242117    8304 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0719 04:36:08.252132    8304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.252132    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jb6nt
	I0719 04:36:08.252132    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.252132    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.252132    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.260873    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:08.261892    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.261892    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.261892    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.261892    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.275407    8304 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0719 04:36:08.275915    8304 pod_ready.go:92] pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.275915    8304 pod_ready.go:81] duration metric: took 23.7831ms for pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.275915    8304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.276560    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jpmb4
	I0719 04:36:08.276560    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.276560    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.276560    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.282646    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:08.283139    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.283717    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.283717    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.283717    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.294749    8304 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 04:36:08.295020    8304 pod_ready.go:92] pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.295020    8304 pod_ready.go:81] duration metric: took 19.1044ms for pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.295020    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.295569    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500
	I0719 04:36:08.295569    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.295569    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.295569    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.301985    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:08.303555    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.303555    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.303555    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.303555    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.308160    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:36:08.309511    8304 pod_ready.go:92] pod "etcd-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.309511    8304 pod_ready.go:81] duration metric: took 14.4904ms for pod "etcd-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.309511    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.309511    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500-m02
	I0719 04:36:08.309511    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.309511    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.309511    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.316942    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:36:08.318163    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:08.318220    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.318277    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.318277    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.321217    8304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:36:08.322246    8304 pod_ready.go:92] pod "etcd-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.322246    8304 pod_ready.go:81] duration metric: took 12.7357ms for pod "etcd-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.322246    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.405266    8304 request.go:629] Waited for 82.6105ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500
	I0719 04:36:08.405464    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500
	I0719 04:36:08.405496    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.405533    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.405533    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.410655    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:08.609122    8304 request.go:629] Waited for 196.3904ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.609300    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.609374    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.609374    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.609374    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.613530    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:36:08.614448    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.614448    8304 pod_ready.go:81] duration metric: took 292.1979ms for pod "kube-apiserver-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.614448    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.811671    8304 request.go:629] Waited for 197.2205ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m02
	I0719 04:36:08.811998    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m02
	I0719 04:36:08.811998    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.811998    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.812241    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.817283    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:09.015873    8304 request.go:629] Waited for 197.1283ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:09.016143    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:09.016179    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.016179    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.016179    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.025188    8304 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:36:09.025738    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:09.025782    8304 pod_ready.go:81] duration metric: took 411.3295ms for pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.025782    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.217760    8304 request.go:629] Waited for 191.7564ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500
	I0719 04:36:09.217837    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500
	I0719 04:36:09.217837    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.217837    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.217923    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.223564    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:09.405589    8304 request.go:629] Waited for 179.988ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:09.405883    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:09.405883    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.405883    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.405883    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.411526    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:09.412457    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:09.412457    8304 pod_ready.go:81] duration metric: took 386.671ms for pod "kube-controller-manager-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.412457    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.607957    8304 request.go:629] Waited for 195.4058ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m02
	I0719 04:36:09.608435    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m02
	I0719 04:36:09.608435    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.608435    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.608435    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.616697    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:09.813007    8304 request.go:629] Waited for 195.0688ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:09.813429    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:09.813501    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.813501    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.813501    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.820103    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:09.820679    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:09.820679    8304 pod_ready.go:81] duration metric: took 408.2167ms for pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.820679    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rtdgs" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.016232    8304 request.go:629] Waited for 195.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtdgs
	I0719 04:36:10.016790    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtdgs
	I0719 04:36:10.016790    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.016790    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.016790    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.024622    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:36:10.204560    8304 request.go:629] Waited for 178.5899ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:10.204741    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:10.204741    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.204741    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.204741    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.210656    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:10.211083    8304 pod_ready.go:92] pod "kube-proxy-rtdgs" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:10.211083    8304 pod_ready.go:81] duration metric: took 390.3997ms for pod "kube-proxy-rtdgs" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.211083    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wv8bn" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.408581    8304 request.go:629] Waited for 197.2106ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wv8bn
	I0719 04:36:10.408738    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wv8bn
	I0719 04:36:10.408738    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.408738    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.408794    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.417445    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:10.612414    8304 request.go:629] Waited for 193.5144ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:10.613273    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:10.613515    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.613515    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.613515    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.619317    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:10.621022    8304 pod_ready.go:92] pod "kube-proxy-wv8bn" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:10.621077    8304 pod_ready.go:81] duration metric: took 409.9888ms for pod "kube-proxy-wv8bn" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.621077    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.816150    8304 request.go:629] Waited for 194.9521ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500
	I0719 04:36:10.816150    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500
	I0719 04:36:10.816150    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.816150    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.816150    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.821728    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:11.004343    8304 request.go:629] Waited for 181.7491ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:11.004519    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:11.004519    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.004519    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.004519    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.010254    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:11.011846    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:11.011846    8304 pod_ready.go:81] duration metric: took 390.7024ms for pod "kube-scheduler-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:11.011846    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:11.209740    8304 request.go:629] Waited for 197.6481ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m02
	I0719 04:36:11.209851    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m02
	I0719 04:36:11.209851    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.210023    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.210083    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.215922    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:11.412579    8304 request.go:629] Waited for 195.2905ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:11.412807    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:11.412807    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.412807    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.412807    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.420217    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:36:11.420943    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:11.420943    8304 pod_ready.go:81] duration metric: took 409.0925ms for pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:11.420943    8304 pod_ready.go:38] duration metric: took 3.2115411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:36:11.421480    8304 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:36:11.435913    8304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:36:11.466287    8304 api_server.go:72] duration metric: took 26.6945589s to wait for apiserver process to appear ...
	I0719 04:36:11.466287    8304 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:36:11.466394    8304 api_server.go:253] Checking apiserver healthz at https://172.28.168.223:8443/healthz ...
	I0719 04:36:11.476476    8304 api_server.go:279] https://172.28.168.223:8443/healthz returned 200:
	ok
	I0719 04:36:11.476476    8304 round_trippers.go:463] GET https://172.28.168.223:8443/version
	I0719 04:36:11.476476    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.476476    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.476476    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.478371    8304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 04:36:11.478842    8304 api_server.go:141] control plane version: v1.30.3
	I0719 04:36:11.478842    8304 api_server.go:131] duration metric: took 12.4898ms to wait for apiserver health ...
	I0719 04:36:11.478842    8304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:36:11.615780    8304 request.go:629] Waited for 136.6917ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:11.616046    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:11.616046    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.616046    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.616152    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.625471    8304 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:36:11.632540    8304 system_pods.go:59] 17 kube-system pods found
	I0719 04:36:11.632540    8304 system_pods.go:61] "coredns-7db6d8ff4d-jb6nt" [799dd902-ac1e-4264-91b3-18bdfcd3c8d6] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "coredns-7db6d8ff4d-jpmb4" [f08afb24-1862-49cd-9065-fd21c96614ca] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "etcd-ha-062500" [7fcd86be-7022-4c7c-8144-e2537879c108] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "etcd-ha-062500-m02" [d7896def-bce8-4197-8016-90a7e745f68c] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kindnet-sk9jr" [06a7499a-0467-433d-9e65-5352dec711cf] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kindnet-xw86l" [8513df89-57a9-4e7a-b30f-df6c7ef5ed58] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-apiserver-ha-062500" [495cdc56-2af6-4ceb-acee-26b9bc09d268] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-apiserver-ha-062500-m02" [f880cb8b-d5aa-4141-8031-26951f630b43] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-controller-manager-ha-062500" [72ca647c-6a15-4408-9bc7-ba1be775d35a] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-controller-manager-ha-062500-m02" [031f15e6-c214-44e4-88f7-f7636f1f4a5e] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-proxy-rtdgs" [5c014afc-3ab0-4d20-83b6-adbb9a6133ec] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-proxy-wv8bn" [75f8ca14-0f7c-4e85-884c-b55161236c22] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-scheduler-ha-062500" [bc127693-7c90-4778-bef4-a9aa231e89a8] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-scheduler-ha-062500-m02" [37551193-9128-4afd-9653-1639d1727249] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-vip-ha-062500" [87843ee5-6fdf-473a-8818-47b1927340d6] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-vip-ha-062500-m02" [8ce744ae-1492-4359-860f-f7ff13977733] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "storage-provisioner" [d029a307-143b-4ef5-8619-f06e267d756c] Running
	I0719 04:36:11.632540    8304 system_pods.go:74] duration metric: took 153.6957ms to wait for pod list to return data ...
	I0719 04:36:11.632540    8304 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:36:11.817888    8304 request.go:629] Waited for 185.3462ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:36:11.818166    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:36:11.818166    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.818166    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.818166    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.822619    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:36:11.823848    8304 default_sa.go:45] found service account: "default"
	I0719 04:36:11.823924    8304 default_sa.go:55] duration metric: took 191.3823ms for default service account to be created ...
	I0719 04:36:11.823924    8304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:36:12.004093    8304 request.go:629] Waited for 179.7796ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:12.004093    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:12.004408    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:12.004408    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:12.004408    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:12.013695    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:12.022167    8304 system_pods.go:86] 17 kube-system pods found
	I0719 04:36:12.022763    8304 system_pods.go:89] "coredns-7db6d8ff4d-jb6nt" [799dd902-ac1e-4264-91b3-18bdfcd3c8d6] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "coredns-7db6d8ff4d-jpmb4" [f08afb24-1862-49cd-9065-fd21c96614ca] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "etcd-ha-062500" [7fcd86be-7022-4c7c-8144-e2537879c108] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "etcd-ha-062500-m02" [d7896def-bce8-4197-8016-90a7e745f68c] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kindnet-sk9jr" [06a7499a-0467-433d-9e65-5352dec711cf] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kindnet-xw86l" [8513df89-57a9-4e7a-b30f-df6c7ef5ed58] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-apiserver-ha-062500" [495cdc56-2af6-4ceb-acee-26b9bc09d268] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-apiserver-ha-062500-m02" [f880cb8b-d5aa-4141-8031-26951f630b43] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-controller-manager-ha-062500" [72ca647c-6a15-4408-9bc7-ba1be775d35a] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-controller-manager-ha-062500-m02" [031f15e6-c214-44e4-88f7-f7636f1f4a5e] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-proxy-rtdgs" [5c014afc-3ab0-4d20-83b6-adbb9a6133ec] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-proxy-wv8bn" [75f8ca14-0f7c-4e85-884c-b55161236c22] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "kube-scheduler-ha-062500" [bc127693-7c90-4778-bef4-a9aa231e89a8] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "kube-scheduler-ha-062500-m02" [37551193-9128-4afd-9653-1639d1727249] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "kube-vip-ha-062500" [87843ee5-6fdf-473a-8818-47b1927340d6] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "kube-vip-ha-062500-m02" [8ce744ae-1492-4359-860f-f7ff13977733] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "storage-provisioner" [d029a307-143b-4ef5-8619-f06e267d756c] Running
	I0719 04:36:12.022895    8304 system_pods.go:126] duration metric: took 198.9688ms to wait for k8s-apps to be running ...
	I0719 04:36:12.022895    8304 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:36:12.033431    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:36:12.062659    8304 system_svc.go:56] duration metric: took 39.7628ms WaitForService to wait for kubelet
	I0719 04:36:12.062786    8304 kubeadm.go:582] duration metric: took 27.2910511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:36:12.062786    8304 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:36:12.210547    8304 request.go:629] Waited for 147.4395ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes
	I0719 04:36:12.210664    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes
	I0719 04:36:12.210664    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:12.210766    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:12.210766    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:12.216164    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:12.217572    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:36:12.217572    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:36:12.217572    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:36:12.217572    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:36:12.217676    8304 node_conditions.go:105] duration metric: took 154.8883ms to run NodePressure ...
	I0719 04:36:12.217676    8304 start.go:241] waiting for startup goroutines ...
	I0719 04:36:12.217742    8304 start.go:255] writing updated cluster config ...
	I0719 04:36:12.222413    8304 out.go:177] 
	I0719 04:36:12.238547    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:36:12.238816    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:36:12.244980    8304 out.go:177] * Starting "ha-062500-m03" control-plane node in "ha-062500" cluster
	I0719 04:36:12.248953    8304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:36:12.248953    8304 cache.go:56] Caching tarball of preloaded images
	I0719 04:36:12.248953    8304 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:36:12.249496    8304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:36:12.249669    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:36:12.254095    8304 start.go:360] acquireMachinesLock for ha-062500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:36:12.254095    8304 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-062500-m03"
	I0719 04:36:12.254095    8304 start.go:93] Provisioning new machine with config: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:36:12.254095    8304 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0719 04:36:12.257899    8304 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:36:12.259172    8304 start.go:159] libmachine.API.Create for "ha-062500" (driver="hyperv")
	I0719 04:36:12.259172    8304 client.go:168] LocalClient.Create starting
	I0719 04:36:12.259332    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 04:36:12.259332    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:36:12.259879    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:36:12.259879    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 04:36:12.259879    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:36:12.259879    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:36:12.260446    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 04:36:14.208710    8304 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 04:36:14.209209    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:14.209313    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 04:36:16.006467    8304 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 04:36:16.006467    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:16.006467    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:36:17.555349    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:36:17.556030    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:17.556100    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:36:21.440207    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:36:21.440290    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:21.443219    8304 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:36:21.895735    8304 main.go:141] libmachine: Creating SSH key...
	I0719 04:36:21.981554    8304 main.go:141] libmachine: Creating VM...
	I0719 04:36:21.981554    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:36:25.179617    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:36:25.179780    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:25.179855    8304 main.go:141] libmachine: Using switch "Default Switch"
	I0719 04:36:25.179855    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:36:27.056358    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:36:27.057242    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:27.057242    8304 main.go:141] libmachine: Creating VHD
	I0719 04:36:27.057376    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 04:36:31.106456    8304 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 02FB9D75-0A7D-44C9-8DB3-21F70D7B66F0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 04:36:31.106456    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:31.106456    8304 main.go:141] libmachine: Writing magic tar header
	I0719 04:36:31.107446    8304 main.go:141] libmachine: Writing SSH key tar header
	I0719 04:36:31.117449    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 04:36:34.495264    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:34.495810    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:34.495974    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\disk.vhd' -SizeBytes 20000MB
	I0719 04:36:37.222989    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:37.222989    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:37.223127    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-062500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0719 04:36:41.011818    8304 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-062500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 04:36:41.011920    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:41.012001    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-062500-m03 -DynamicMemoryEnabled $false
	I0719 04:36:43.428664    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:43.428664    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:43.429652    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-062500-m03 -Count 2
	I0719 04:36:45.727991    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:45.727991    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:45.727991    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-062500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\boot2docker.iso'
	I0719 04:36:48.423800    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:48.424858    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:48.424858    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-062500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\disk.vhd'
	I0719 04:36:51.195380    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:51.195664    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:51.195664    8304 main.go:141] libmachine: Starting VM...
	I0719 04:36:51.195664    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-062500-m03
	I0719 04:36:54.445105    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:54.445105    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:54.445105    8304 main.go:141] libmachine: Waiting for host to start...
	I0719 04:36:54.445799    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:36:56.847637    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:36:56.847830    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:56.847912    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:36:59.516916    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:59.516916    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:00.518662    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:02.825365    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:02.825365    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:02.825365    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:05.467277    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:37:05.468061    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:06.476755    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:08.807598    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:08.807654    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:08.807654    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:11.452138    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:37:11.452138    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:12.457214    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:14.782848    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:14.783419    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:14.783419    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:17.417851    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:37:17.417851    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:18.423958    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:20.765749    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:20.765749    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:20.765894    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:23.404281    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:23.404281    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:23.404809    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:25.661786    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:25.661786    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:25.661786    8304 machine.go:94] provisionDockerMachine start ...
	I0719 04:37:25.662745    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:27.938786    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:27.938786    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:27.939691    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:30.635832    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:30.636117    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:30.641890    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:37:30.652607    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:37:30.652607    8304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:37:30.781326    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 04:37:30.781414    8304 buildroot.go:166] provisioning hostname "ha-062500-m03"
	I0719 04:37:30.781488    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:33.005461    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:33.005461    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:33.005461    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:35.650548    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:35.651158    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:35.656085    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:37:35.656852    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:37:35.656852    8304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-062500-m03 && echo "ha-062500-m03" | sudo tee /etc/hostname
	I0719 04:37:35.808581    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-062500-m03
	
	I0719 04:37:35.808581    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:38.046580    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:38.047154    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:38.047221    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:40.688863    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:40.689021    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:40.694485    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:37:40.695187    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:37:40.695187    8304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-062500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-062500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-062500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:37:40.830146    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:37:40.830261    8304 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:37:40.830294    8304 buildroot.go:174] setting up certificates
	I0719 04:37:40.830320    8304 provision.go:84] configureAuth start
	I0719 04:37:40.830320    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:43.066302    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:43.066302    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:43.066302    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:45.680184    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:45.680184    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:45.680343    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:47.886215    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:47.886215    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:47.886215    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:50.532283    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:50.532283    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:50.532370    8304 provision.go:143] copyHostCerts
	I0719 04:37:50.532370    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 04:37:50.532370    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:37:50.532370    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:37:50.533295    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:37:50.534425    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 04:37:50.534677    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:37:50.534765    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:37:50.535157    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:37:50.536213    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 04:37:50.536213    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:37:50.536213    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:37:50.536831    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:37:50.537931    8304 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-062500-m03 san=[127.0.0.1 172.28.161.140 ha-062500-m03 localhost minikube]
	I0719 04:37:50.711408    8304 provision.go:177] copyRemoteCerts
	I0719 04:37:50.721871    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:37:50.721871    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:52.930221    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:52.930221    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:52.930221    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:55.591827    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:55.592133    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:55.592651    8304 sshutil.go:53] new ssh client: &{IP:172.28.161.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\id_rsa Username:docker}
	I0719 04:37:55.696752    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.974823s)
	I0719 04:37:55.696752    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 04:37:55.697384    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:37:55.744928    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 04:37:55.744928    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:37:55.793798    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 04:37:55.794291    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 04:37:55.840911    8304 provision.go:87] duration metric: took 15.0104184s to configureAuth
	I0719 04:37:55.840911    8304 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:37:55.841781    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:37:55.841781    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:58.074954    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:58.074954    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:58.074954    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:00.704914    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:00.704914    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:00.710701    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:00.711453    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:00.711453    8304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:38:00.836750    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:38:00.836750    8304 buildroot.go:70] root file system type: tmpfs
	I0719 04:38:00.837058    8304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:38:00.837153    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:03.032084    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:03.032215    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:03.032278    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:05.689246    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:05.689246    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:05.694119    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:05.695068    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:05.695068    8304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.168.223"
	Environment="NO_PROXY=172.28.168.223,172.28.171.55"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:38:05.849044    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.168.223
	Environment=NO_PROXY=172.28.168.223,172.28.171.55
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
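The unit file above is staged through `tee` at a `.new` path rather than written over the live `docker.service` directly. A minimal sketch of that staging step, using illustrative scratch paths instead of `/lib/systemd/system`:

```shell
# Illustrative only: stage a rendered unit at a .new path via tee,
# mirroring how the log above writes docker.service.new first.
unit=/tmp/demo.docker.service.new
printf '%s\n' '[Unit]' 'Description=Docker Application Container Engine' \
  | tee "$unit" >/dev/null
grep -c '^' "$unit"   # counts the two staged lines
```

The live unit is only swapped in by a later diff-and-move step, so a failed render never leaves a half-written service file behind.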
	I0719 04:38:05.849209    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:08.085806    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:08.085806    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:08.086273    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:10.723830    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:10.723830    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:10.730087    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:10.730682    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:10.730682    8304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:38:12.996398    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 04:38:12.996512    8304 machine.go:97] duration metric: took 47.3341818s to provisionDockerMachine
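The `diff ... || { mv ...; systemctl ...; }` command above is an update-if-changed idiom: `diff` exits non-zero both when the installed unit differs and when it does not exist yet (the "can't stat" case in the output), so the install branch runs exactly when needed. A hedged sketch with scratch paths and no systemd calls:

```shell
# Illustrative sketch (temp paths, no systemd): diff exits non-zero when
# the installed unit is missing or differs, so the replace branch runs
# only when an update is actually needed.
new=/tmp/demo.unit.new
cur=/tmp/demo.unit
rm -f "$new" "$cur"
printf 'ExecStart=/usr/bin/dockerd\n' > "$new"
diff -u "$cur" "$new" >/dev/null 2>&1 || { mv "$new" "$cur"; echo "unit installed"; }
cat "$cur"
```

On first boot `$cur` is missing, so the branch installs the staged file; on later runs an identical render leaves the unit (and the running service) untouched.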
	I0719 04:38:12.996512    8304 client.go:171] duration metric: took 2m0.7359517s to LocalClient.Create
	I0719 04:38:12.996512    8304 start.go:167] duration metric: took 2m0.7359517s to libmachine.API.Create "ha-062500"
	I0719 04:38:12.996512    8304 start.go:293] postStartSetup for "ha-062500-m03" (driver="hyperv")
	I0719 04:38:12.996714    8304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:38:13.009645    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:38:13.009645    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:15.251033    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:15.251033    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:15.251314    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:17.961438    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:17.961742    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:17.962219    8304 sshutil.go:53] new ssh client: &{IP:172.28.161.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\id_rsa Username:docker}
	I0719 04:38:18.069221    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0595175s)
	I0719 04:38:18.082042    8304 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:38:18.088836    8304 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:38:18.088920    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:38:18.089517    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:38:18.090817    8304 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:38:18.090900    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 04:38:18.104517    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:38:18.124249    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:38:18.171090    8304 start.go:296] duration metric: took 5.1744s for postStartSetup
	I0719 04:38:18.173555    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:20.405346    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:20.405346    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:20.405346    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:23.047973    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:23.048169    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:23.048477    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:38:23.051731    8304 start.go:128] duration metric: took 2m10.7961321s to createHost
	I0719 04:38:23.051817    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:25.285354    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:25.285354    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:25.285354    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:27.930325    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:27.930325    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:27.937360    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:27.938005    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:27.938005    8304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 04:38:28.063354    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363908.068956500
	
	I0719 04:38:28.063354    8304 fix.go:216] guest clock: 1721363908.068956500
	I0719 04:38:28.063354    8304 fix.go:229] Guest: 2024-07-19 04:38:28.0689565 +0000 UTC Remote: 2024-07-19 04:38:23.0518173 +0000 UTC m=+593.907235101 (delta=5.0171392s)
	I0719 04:38:28.063354    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:30.328771    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:30.328771    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:30.328771    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:33.027801    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:33.027801    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:33.034848    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:33.035486    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:33.035486    8304 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721363908
	I0719 04:38:33.185397    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:38:28 UTC 2024
	
	I0719 04:38:33.185430    8304 fix.go:236] clock set: Fri Jul 19 04:38:28 UTC 2024
	 (err=<nil>)
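The clock-sync step above reads the guest clock, computes the ~5s delta against the host, and pushes the host epoch into the guest with `date -s @1721363908`. A hedged check (GNU `date` assumed) that the epoch used in the log corresponds to the wall-clock string the VM echoed back:

```shell
# Hedged check (GNU date assumed): the epoch pushed to the guest above
# corresponds to the wall-clock string the VM printed afterwards.
guest=$(date -u -d @1721363908 '+%a %b %e %H:%M:%S UTC %Y')
echo "$guest"   # Fri Jul 19 04:38:28 UTC 2024
```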
	I0719 04:38:33.185430    8304 start.go:83] releasing machines lock for "ha-062500-m03", held for 2m20.9297137s
	I0719 04:38:33.185703    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:35.476929    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:35.477161    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:35.477227    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:38.230311    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:38.230311    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:38.236514    8304 out.go:177] * Found network options:
	I0719 04:38:38.242556    8304 out.go:177]   - NO_PROXY=172.28.168.223,172.28.171.55
	W0719 04:38:38.248877    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:38:38.248877    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:38:38.250878    8304 out.go:177]   - NO_PROXY=172.28.168.223,172.28.171.55
	W0719 04:38:38.253872    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:38:38.253872    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:38:38.255286    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:38:38.255398    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:38:38.258699    8304 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:38:38.258699    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:38.269746    8304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 04:38:38.269746    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:40.606197    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:40.606197    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:40.606933    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:40.614759    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:40.614759    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:40.614954    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:43.359498    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:43.359498    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:43.360475    8304 sshutil.go:53] new ssh client: &{IP:172.28.161.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\id_rsa Username:docker}
	I0719 04:38:43.385920    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:43.386773    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:43.387526    8304 sshutil.go:53] new ssh client: &{IP:172.28.161.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\id_rsa Username:docker}
	I0719 04:38:43.454475    8304 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1846697s)
	W0719 04:38:43.454475    8304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:38:43.466246    8304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:38:43.473292    8304 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2145332s)
	W0719 04:38:43.473417    8304 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:38:43.504551    8304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:38:43.504551    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:38:43.504551    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:38:43.553113    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 04:38:43.585639    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0719 04:38:43.593796    8304 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:38:43.593796    8304 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:38:43.611903    8304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:38:43.628556    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:38:43.660199    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:38:43.696203    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:38:43.731386    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:38:43.763909    8304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:38:43.795007    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:38:43.828759    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:38:43.861372    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 04:38:43.898876    8304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:38:43.930528    8304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:38:43.961084    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:44.169583    8304 ssh_runner.go:195] Run: sudo systemctl restart containerd
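The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place to select the cgroupfs driver. The key edit is the `SystemdCgroup` flip; a sketch of that one rewrite against a scratch config instead of the real one:

```shell
# Illustrative: apply the same sed rewrite from the log to a scratch
# config.toml instead of /etc/containerd/config.toml.
cfg=/tmp/demo.config.toml
printf '    SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

The `( *)` capture and `\1` backreference preserve the key's original TOML indentation, which is why the rewritten line still sits correctly under its `[plugins...]` table.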
	I0719 04:38:44.208988    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:38:44.225381    8304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:38:44.264349    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:38:44.301613    8304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:38:44.358726    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:38:44.397618    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:38:44.432281    8304 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 04:38:44.496076    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:38:44.520573    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:38:44.568753    8304 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:38:44.588944    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:38:44.606966    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:38:44.657049    8304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:38:44.858832    8304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:38:45.051566    8304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:38:45.052488    8304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:38:45.096717    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:45.298529    8304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:38:47.900869    8304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6013568s)
	I0719 04:38:47.912673    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 04:38:47.949716    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:38:47.983804    8304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 04:38:48.186483    8304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 04:38:48.385418    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:48.595727    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 04:38:48.635759    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:38:48.667764    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:48.880626    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 04:38:48.996872    8304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 04:38:49.008915    8304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 04:38:49.017858    8304 start.go:563] Will wait 60s for crictl version
	I0719 04:38:49.028640    8304 ssh_runner.go:195] Run: which crictl
	I0719 04:38:49.046514    8304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:38:49.111006    8304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 04:38:49.118990    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:38:49.162268    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:38:49.197161    8304 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 04:38:49.201296    8304 out.go:177]   - env NO_PROXY=172.28.168.223
	I0719 04:38:49.203915    8304 out.go:177]   - env NO_PROXY=172.28.168.223,172.28.171.55
	I0719 04:38:49.206346    8304 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 04:38:49.215294    8304 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 04:38:49.215708    8304 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 04:38:49.215708    8304 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 04:38:49.215708    8304 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 04:38:49.219359    8304 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 04:38:49.219382    8304 ip.go:210] interface addr: 172.28.160.1/20
	I0719 04:38:49.231133    8304 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 04:38:49.237974    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
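The `/etc/hosts` command above is a filter-and-append idiom: strip any stale `host.minikube.internal` line, append the current host IP, and copy the result back, so repeated provisioning never accumulates duplicate entries. A sketch on a scratch file without `sudo`:

```shell
# Illustrative (scratch file, no sudo): drop any stale
# host.minikube.internal entry, then append the current host IP,
# mirroring the /etc/hosts rewrite in the log above.
hosts=/tmp/demo.hosts
printf '127.0.0.1\tlocalhost\n1.2.3.4\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.28.160.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

The `$'\t...'` pattern anchors on the tab separator and the end of line, so only the exact hostname entry is removed while unrelated lines such as `localhost` pass through.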
	I0719 04:38:49.264356    8304 mustload.go:65] Loading cluster: ha-062500
	I0719 04:38:49.265631    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:38:49.266919    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:38:51.447338    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:51.447558    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:51.447558    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:38:51.448435    8304 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500 for IP: 172.28.161.140
	I0719 04:38:51.448435    8304 certs.go:194] generating shared ca certs ...
	I0719 04:38:51.448493    8304 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:38:51.449395    8304 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 04:38:51.449927    8304 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 04:38:51.449982    8304 certs.go:256] generating profile certs ...
	I0719 04:38:51.451093    8304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key
	I0719 04:38:51.451295    8304 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.6ecf5f11
	I0719 04:38:51.451521    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.6ecf5f11 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.168.223 172.28.171.55 172.28.161.140 172.28.175.254]
	I0719 04:38:51.686772    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.6ecf5f11 ...
	I0719 04:38:51.686772    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.6ecf5f11: {Name:mk966cd9b89e774069784355cc8da1117973bc8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:38:51.688645    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.6ecf5f11 ...
	I0719 04:38:51.688645    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.6ecf5f11: {Name:mk3473ef6e1b5ac680e036b33607771f3b5c536e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:38:51.689467    8304 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.6ecf5f11 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt
	I0719 04:38:51.701233    8304 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.6ecf5f11 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key
	I0719 04:38:51.703293    8304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key
	I0719 04:38:51.703293    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:38:51.703441    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:38:51.703441    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:38:51.703441    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:38:51.703441    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:38:51.704085    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:38:51.712298    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:38:51.712508    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:38:51.712508    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 04:38:51.712508    8304 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 04:38:51.713228    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 04:38:51.713628    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 04:38:51.713953    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 04:38:51.714016    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 04:38:51.714298    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 04:38:51.714298    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 04:38:51.714298    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 04:38:51.714298    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:38:51.715291    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:38:53.922590    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:53.922590    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:53.922590    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:56.549638    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:38:56.549638    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:56.550520    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:38:56.651768    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0719 04:38:56.659638    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 04:38:56.691129    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0719 04:38:56.698104    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 04:38:56.729716    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 04:38:56.736877    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 04:38:56.769154    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0719 04:38:56.778242    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0719 04:38:56.811097    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0719 04:38:56.818057    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 04:38:56.848295    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0719 04:38:56.854647    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0719 04:38:56.874190    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:38:56.922285    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:38:56.966116    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:38:57.012779    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:38:57.065835    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0719 04:38:57.113320    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:38:57.160981    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:38:57.209129    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:38:57.258436    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 04:38:57.308974    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 04:38:57.355022    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:38:57.400532    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 04:38:57.432231    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 04:38:57.461470    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 04:38:57.494650    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0719 04:38:57.526960    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 04:38:57.560812    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0719 04:38:57.595401    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 04:38:57.659492    8304 ssh_runner.go:195] Run: openssl version
	I0719 04:38:57.681800    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 04:38:57.713786    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 04:38:57.721069    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 04:38:57.732159    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 04:38:57.752790    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
	I0719 04:38:57.783730    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 04:38:57.816130    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 04:38:57.823167    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 04:38:57.834762    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 04:38:57.856488    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:38:57.887133    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:38:57.920853    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:38:57.928831    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:38:57.939606    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:38:57.962288    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:38:57.995527    8304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:38:58.001061    8304 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:38:58.002092    8304 kubeadm.go:934] updating node {m03 172.28.161.140 8443 v1.30.3 docker true true} ...
	I0719 04:38:58.002289    8304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-062500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.161.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:38:58.002442    8304 kube-vip.go:115] generating kube-vip config ...
	I0719 04:38:58.015230    8304 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:38:58.047099    8304 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:38:58.047099    8304 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:38:58.062722    8304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:38:58.083989    8304 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 04:38:58.096545    8304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 04:38:58.118660    8304 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0719 04:38:58.118660    8304 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 04:38:58.118660    8304 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 04:38:58.118660    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:38:58.118660    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:38:58.134489    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:38:58.137435    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:38:58.140121    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:38:58.142321    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 04:38:58.143251    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 04:38:58.184989    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 04:38:58.184989    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:38:58.184989    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 04:38:58.197539    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:38:58.252513    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 04:38:58.252513    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 04:38:59.529561    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 04:38:59.548178    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0719 04:38:59.580235    8304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:38:59.612351    8304 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:38:59.657494    8304 ssh_runner.go:195] Run: grep 172.28.175.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:38:59.663526    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:38:59.699571    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:59.909515    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:38:59.942346    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:38:59.942346    8304 start.go:317] joinCluster: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.161.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:38:59.943546    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 04:38:59.943763    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:39:02.157605    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:39:02.157605    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:39:02.157605    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:39:04.830211    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:39:04.830295    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:39:04.830866    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:39:05.049950    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1063454s)
	I0719 04:39:05.049950    8304 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.161.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:39:05.049950    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token whbjti.l37hrpvm5f3lggpj --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-062500-m03 --control-plane --apiserver-advertise-address=172.28.161.140 --apiserver-bind-port=8443"
	I0719 04:39:49.278256    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token whbjti.l37hrpvm5f3lggpj --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-062500-m03 --control-plane --apiserver-advertise-address=172.28.161.140 --apiserver-bind-port=8443": (44.2277202s)
	I0719 04:39:49.278328    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 04:39:50.368201    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.0897909s)
	I0719 04:39:50.379636    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-062500-m03 minikube.k8s.io/updated_at=2024_07_19T04_39_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-062500 minikube.k8s.io/primary=false
	I0719 04:39:50.576587    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-062500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 04:39:50.738179    8304 start.go:319] duration metric: took 50.7952483s to joinCluster
	I0719 04:39:50.738179    8304 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.28.161.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:39:50.740276    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:39:50.742006    8304 out.go:177] * Verifying Kubernetes components...
	I0719 04:39:50.756690    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:39:51.095841    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:39:51.124901    8304 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:39:51.125822    8304 kapi.go:59] client config for ha-062500: &rest.Config{Host:"https://172.28.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 04:39:51.125822    8304 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.175.254:8443 with https://172.28.168.223:8443
	I0719 04:39:51.126829    8304 node_ready.go:35] waiting up to 6m0s for node "ha-062500-m03" to be "Ready" ...
	I0719 04:39:51.126829    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:51.126829    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:51.126829    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:51.126829    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:51.139862    8304 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0719 04:39:51.641304    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:51.641304    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:51.641304    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:51.641304    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:51.646338    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:52.131962    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:52.131962    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:52.131962    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:52.131962    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:52.139536    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:39:52.639176    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:52.639176    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:52.639176    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:52.639176    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:52.646212    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:39:53.130811    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:53.130884    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:53.130884    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:53.130929    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:53.136389    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:53.137781    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:39:53.637534    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:53.637599    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:53.637599    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:53.637599    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:53.642076    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:39:54.131097    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:54.131097    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:54.131097    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:54.131097    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:54.136649    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:54.639712    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:54.639890    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:54.639890    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:54.639890    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:54.644945    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:55.129396    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:55.129396    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:55.129396    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:55.129396    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:55.133412    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:39:55.639624    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:55.639624    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:55.639624    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:55.639624    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:55.645227    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:55.646755    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:39:56.128599    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:56.128599    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:56.128704    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:56.128704    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:56.133953    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:56.636140    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:56.636140    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:56.636247    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:56.636247    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:56.639840    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:39:57.128049    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:57.128091    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:57.128091    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:57.128165    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:57.135510    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:39:57.628505    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:57.628793    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:57.628793    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:57.628793    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:57.636233    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:39:58.132905    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:58.132905    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:58.132905    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:58.132905    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:58.137734    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:39:58.139217    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:39:58.640440    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:58.640582    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:58.640582    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:58.640582    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:58.646167    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:59.127507    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:59.127571    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:59.127571    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:59.127571    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:59.133641    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:39:59.634933    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:59.634933    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:59.635028    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:59.635028    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:59.639479    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:00.134968    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:00.135114    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:00.135114    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:00.135114    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:00.139920    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:00.140755    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:00.637454    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:00.637526    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:00.637526    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:00.637526    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:00.642173    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:01.137538    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:01.137538    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:01.137538    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:01.137538    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:01.142133    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:01.637400    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:01.637507    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:01.637507    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:01.637507    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:01.641959    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:02.139334    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:02.139464    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:02.139464    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:02.139464    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:02.144939    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:02.145528    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:02.641970    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:02.642118    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:02.642118    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:02.642118    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:02.646555    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:03.128341    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:03.128490    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:03.128547    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:03.128547    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:03.132323    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:03.629311    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:03.629311    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:03.629311    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:03.629423    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:03.636321    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:40:04.132070    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:04.132275    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:04.132275    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:04.132275    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:04.137074    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:04.633389    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:04.633389    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:04.633389    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:04.633389    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:04.638394    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:04.639393    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:05.137956    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:05.137956    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:05.137956    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:05.137956    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:05.145383    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:40:05.635144    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:05.635144    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:05.635144    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:05.635144    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:05.640774    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:06.136827    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:06.136878    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:06.136878    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:06.136878    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:06.141650    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:06.633644    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:06.633888    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:06.633888    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:06.633888    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:06.639156    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:06.639864    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:07.133136    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:07.133136    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:07.133136    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:07.133136    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:07.138541    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:07.633315    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:07.633315    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:07.633315    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:07.633315    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:07.638963    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:08.132679    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:08.132788    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:08.132788    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:08.132788    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:08.137681    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:08.632870    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:08.632870    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:08.632996    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:08.632996    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:08.649964    8304 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0719 04:40:08.650751    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:09.131421    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:09.131421    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.131517    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.131517    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.134981    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:09.137897    8304 node_ready.go:49] node "ha-062500-m03" has status "Ready":"True"
	I0719 04:40:09.138442    8304 node_ready.go:38] duration metric: took 18.0114054s for node "ha-062500-m03" to be "Ready" ...
	I0719 04:40:09.138508    8304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:40:09.138727    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:09.138757    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.138757    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.138757    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.148730    8304 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:40:09.158711    8304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.158711    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jb6nt
	I0719 04:40:09.158711    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.158711    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.158711    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.175309    8304 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0719 04:40:09.176376    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.176439    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.176439    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.176439    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.184963    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:40:09.185707    8304 pod_ready.go:92] pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.185707    8304 pod_ready.go:81] duration metric: took 26.9957ms for pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.185707    8304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.186302    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jpmb4
	I0719 04:40:09.186302    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.186302    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.186302    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.190146    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:09.191198    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.191198    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.191198    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.191198    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.196134    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.196441    8304 pod_ready.go:92] pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.196441    8304 pod_ready.go:81] duration metric: took 10.7332ms for pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.196441    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.196441    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500
	I0719 04:40:09.196441    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.196441    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.196441    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.200639    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.201410    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.201410    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.201410    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.201410    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.205055    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:09.206020    8304 pod_ready.go:92] pod "etcd-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.206020    8304 pod_ready.go:81] duration metric: took 9.5793ms for pod "etcd-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.206020    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.206020    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500-m02
	I0719 04:40:09.206020    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.206020    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.206020    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.210039    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.210741    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:09.211544    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.211647    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.211647    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.215378    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:09.216189    8304 pod_ready.go:92] pod "etcd-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.216189    8304 pod_ready.go:81] duration metric: took 10.1686ms for pod "etcd-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.216189    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.336183    8304 request.go:629] Waited for 119.7862ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500-m03
	I0719 04:40:09.336265    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500-m03
	I0719 04:40:09.336265    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.336265    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.336265    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.340645    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.541132    8304 request.go:629] Waited for 199.0002ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:09.541497    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:09.541497    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.541497    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.541497    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.546397    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.547855    8304 pod_ready.go:92] pod "etcd-ha-062500-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.547855    8304 pod_ready.go:81] duration metric: took 331.6624ms for pod "etcd-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.547923    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.745581    8304 request.go:629] Waited for 197.5616ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500
	I0719 04:40:09.745581    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500
	I0719 04:40:09.745581    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.745581    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.745581    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.751621    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:40:09.933226    8304 request.go:629] Waited for 180.6563ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.933676    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.933676    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.933676    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.933794    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.939016    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:09.940358    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.940465    8304 pod_ready.go:81] duration metric: took 392.5372ms for pod "kube-apiserver-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.940465    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.136761    8304 request.go:629] Waited for 195.6804ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m02
	I0719 04:40:10.136834    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m02
	I0719 04:40:10.136897    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.136897    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.136897    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.141687    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:10.341312    8304 request.go:629] Waited for 198.3379ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:10.341312    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:10.341312    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.341312    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.341312    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.346525    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:10.347940    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:10.347992    8304 pod_ready.go:81] duration metric: took 407.5219ms for pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.348041    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.545529    8304 request.go:629] Waited for 197.276ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m03
	I0719 04:40:10.545649    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m03
	I0719 04:40:10.545649    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.545649    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.545866    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.553646    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:40:10.735379    8304 request.go:629] Waited for 180.5133ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:10.735563    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:10.735563    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.735563    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.735563    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.741052    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:10.742432    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:10.742471    8304 pod_ready.go:81] duration metric: took 394.4261ms for pod "kube-apiserver-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.742471    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.939057    8304 request.go:629] Waited for 196.3292ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500
	I0719 04:40:10.939057    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500
	I0719 04:40:10.939057    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.939057    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.939057    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.944726    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:11.144364    8304 request.go:629] Waited for 198.7815ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:11.144364    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:11.144364    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.144364    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.144364    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.148808    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:11.149098    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:11.149098    8304 pod_ready.go:81] duration metric: took 406.6223ms for pod "kube-controller-manager-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.149098    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.346399    8304 request.go:629] Waited for 196.7667ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m02
	I0719 04:40:11.346399    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m02
	I0719 04:40:11.346848    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.346965    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.346965    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.351770    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:11.533560    8304 request.go:629] Waited for 179.8545ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:11.533765    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:11.533901    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.533901    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.533901    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.539104    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:11.541004    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:11.541004    8304 pod_ready.go:81] duration metric: took 391.9013ms for pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.541058    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.737507    8304 request.go:629] Waited for 196.3797ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m03
	I0719 04:40:11.737745    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m03
	I0719 04:40:11.737745    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.737898    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.737898    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.745609    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:40:11.942973    8304 request.go:629] Waited for 195.7217ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:11.943227    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:11.943305    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.943330    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.943330    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.947548    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:11.948837    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:11.948970    8304 pod_ready.go:81] duration metric: took 407.9079ms for pod "kube-controller-manager-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.949065    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7z8c" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.131684    8304 request.go:629] Waited for 182.3278ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7z8c
	I0719 04:40:12.131938    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7z8c
	I0719 04:40:12.132014    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.132376    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.132376    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.140460    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:40:12.333926    8304 request.go:629] Waited for 192.1163ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:12.334228    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:12.334228    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.334228    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.334228    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.338994    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:12.340574    8304 pod_ready.go:92] pod "kube-proxy-g7z8c" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:12.340574    8304 pod_ready.go:81] duration metric: took 391.5047ms for pod "kube-proxy-g7z8c" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.340574    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rtdgs" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.536974    8304 request.go:629] Waited for 196.2907ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtdgs
	I0719 04:40:12.536974    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtdgs
	I0719 04:40:12.537204    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.537228    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.537228    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.542792    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:12.739958    8304 request.go:629] Waited for 195.5941ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:12.740099    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:12.740099    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.740099    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.740099    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.743715    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:12.745308    8304 pod_ready.go:92] pod "kube-proxy-rtdgs" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:12.745308    8304 pod_ready.go:81] duration metric: took 404.7298ms for pod "kube-proxy-rtdgs" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.745308    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wv8bn" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.943698    8304 request.go:629] Waited for 198.3873ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wv8bn
	I0719 04:40:12.944006    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wv8bn
	I0719 04:40:12.944006    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.944059    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.944076    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.948347    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:13.132346    8304 request.go:629] Waited for 181.6142ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:13.132613    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:13.132613    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.132684    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.132684    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.136808    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:13.137808    8304 pod_ready.go:92] pod "kube-proxy-wv8bn" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:13.137871    8304 pod_ready.go:81] duration metric: took 392.5579ms for pod "kube-proxy-wv8bn" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.137871    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.336477    8304 request.go:629] Waited for 198.4057ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500
	I0719 04:40:13.336669    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500
	I0719 04:40:13.336817    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.336817    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.336817    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.341759    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:13.539783    8304 request.go:629] Waited for 197.7434ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:13.539783    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:13.539783    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.539783    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.539783    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.545083    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:13.546724    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:13.546724    8304 pod_ready.go:81] duration metric: took 408.849ms for pod "kube-scheduler-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.546724    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.742005    8304 request.go:629] Waited for 194.9275ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m02
	I0719 04:40:13.742156    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m02
	I0719 04:40:13.742156    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.742156    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.742156    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.747811    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:13.944527    8304 request.go:629] Waited for 195.9958ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:13.944702    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:13.944702    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.944702    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.944702    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.951465    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:40:13.953931    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:13.953931    8304 pod_ready.go:81] duration metric: took 407.202ms for pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.953931    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:14.133774    8304 request.go:629] Waited for 179.4456ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m03
	I0719 04:40:14.133932    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m03
	I0719 04:40:14.133932    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.133998    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.133998    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.141811    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:40:14.335021    8304 request.go:629] Waited for 191.3587ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:14.335617    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:14.335617    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.335617    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.335617    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.340533    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:14.341649    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:14.341649    8304 pod_ready.go:81] duration metric: took 387.6486ms for pod "kube-scheduler-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:14.341717    8304 pod_ready.go:38] duration metric: took 5.2030838s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:40:14.341717    8304 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:40:14.353430    8304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:40:14.382324    8304 api_server.go:72] duration metric: took 23.6427717s to wait for apiserver process to appear ...
	I0719 04:40:14.382324    8304 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:40:14.382422    8304 api_server.go:253] Checking apiserver healthz at https://172.28.168.223:8443/healthz ...
	I0719 04:40:14.398288    8304 api_server.go:279] https://172.28.168.223:8443/healthz returned 200:
	ok
	I0719 04:40:14.398288    8304 round_trippers.go:463] GET https://172.28.168.223:8443/version
	I0719 04:40:14.398288    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.398288    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.398288    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.399133    8304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 04:40:14.400262    8304 api_server.go:141] control plane version: v1.30.3
	I0719 04:40:14.400341    8304 api_server.go:131] duration metric: took 18.0169ms to wait for apiserver health ...
	I0719 04:40:14.400341    8304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:40:14.539806    8304 request.go:629] Waited for 139.1576ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:14.539806    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:14.540016    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.540016    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.540016    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.549336    8304 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:40:14.560637    8304 system_pods.go:59] 24 kube-system pods found
	I0719 04:40:14.560637    8304 system_pods.go:61] "coredns-7db6d8ff4d-jb6nt" [799dd902-ac1e-4264-91b3-18bdfcd3c8d6] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "coredns-7db6d8ff4d-jpmb4" [f08afb24-1862-49cd-9065-fd21c96614ca] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "etcd-ha-062500" [7fcd86be-7022-4c7c-8144-e2537879c108] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "etcd-ha-062500-m02" [d7896def-bce8-4197-8016-90a7e745f68c] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "etcd-ha-062500-m03" [f90e665c-fb9e-48b1-abcc-dc990ca0a31b] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kindnet-g9b42" [7c244eed-a81b-4088-adfa-bcdccd3cb4f0] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kindnet-sk9jr" [06a7499a-0467-433d-9e65-5352dec711cf] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kindnet-xw86l" [8513df89-57a9-4e7a-b30f-df6c7ef5ed58] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-apiserver-ha-062500" [495cdc56-2af6-4ceb-acee-26b9bc09d268] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-apiserver-ha-062500-m02" [f880cb8b-d5aa-4141-8031-26951f630b43] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-apiserver-ha-062500-m03" [29968640-2d8b-4694-8b0a-d6cfaaa20cdc] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-controller-manager-ha-062500" [72ca647c-6a15-4408-9bc7-ba1be775d35a] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-controller-manager-ha-062500-m02" [031f15e6-c214-44e4-88f7-f7636f1f4a5e] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-controller-manager-ha-062500-m03" [33115099-6fd3-4486-a359-ab11c68c4f0e] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-proxy-g7z8c" [a8637650-ff75-4192-90ec-acfc39f14a7f] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-proxy-rtdgs" [5c014afc-3ab0-4d20-83b6-adbb9a6133ec] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-proxy-wv8bn" [75f8ca14-0f7c-4e85-884c-b55161236c22] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-scheduler-ha-062500" [bc127693-7c90-4778-bef4-a9aa231e89a8] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-scheduler-ha-062500-m02" [37551193-9128-4afd-9653-1639d1727249] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-scheduler-ha-062500-m03" [01ce36f4-8c3e-4bd7-aa4f-230aa4273049] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-vip-ha-062500" [87843ee5-6fdf-473a-8818-47b1927340d6] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-vip-ha-062500-m02" [8ce744ae-1492-4359-860f-f7ff13977733] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-vip-ha-062500-m03" [30925675-f944-440d-a0b5-a8356bd0297b] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "storage-provisioner" [d029a307-143b-4ef5-8619-f06e267d756c] Running
	I0719 04:40:14.560637    8304 system_pods.go:74] duration metric: took 160.2936ms to wait for pod list to return data ...
	I0719 04:40:14.560637    8304 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:40:14.742154    8304 request.go:629] Waited for 181.515ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:40:14.742154    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:40:14.742154    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.742154    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.742154    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.746655    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:14.747615    8304 default_sa.go:45] found service account: "default"
	I0719 04:40:14.747615    8304 default_sa.go:55] duration metric: took 186.9759ms for default service account to be created ...
	I0719 04:40:14.747677    8304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:40:14.944354    8304 request.go:629] Waited for 196.3359ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:14.944410    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:14.944410    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.944410    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.944410    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.954449    8304 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 04:40:14.965143    8304 system_pods.go:86] 24 kube-system pods found
	I0719 04:40:14.965143    8304 system_pods.go:89] "coredns-7db6d8ff4d-jb6nt" [799dd902-ac1e-4264-91b3-18bdfcd3c8d6] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "coredns-7db6d8ff4d-jpmb4" [f08afb24-1862-49cd-9065-fd21c96614ca] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "etcd-ha-062500" [7fcd86be-7022-4c7c-8144-e2537879c108] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "etcd-ha-062500-m02" [d7896def-bce8-4197-8016-90a7e745f68c] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "etcd-ha-062500-m03" [f90e665c-fb9e-48b1-abcc-dc990ca0a31b] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kindnet-g9b42" [7c244eed-a81b-4088-adfa-bcdccd3cb4f0] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kindnet-sk9jr" [06a7499a-0467-433d-9e65-5352dec711cf] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kindnet-xw86l" [8513df89-57a9-4e7a-b30f-df6c7ef5ed58] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kube-apiserver-ha-062500" [495cdc56-2af6-4ceb-acee-26b9bc09d268] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kube-apiserver-ha-062500-m02" [f880cb8b-d5aa-4141-8031-26951f630b43] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kube-apiserver-ha-062500-m03" [29968640-2d8b-4694-8b0a-d6cfaaa20cdc] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kube-controller-manager-ha-062500" [72ca647c-6a15-4408-9bc7-ba1be775d35a] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-controller-manager-ha-062500-m02" [031f15e6-c214-44e4-88f7-f7636f1f4a5e] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-controller-manager-ha-062500-m03" [33115099-6fd3-4486-a359-ab11c68c4f0e] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-proxy-g7z8c" [a8637650-ff75-4192-90ec-acfc39f14a7f] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-proxy-rtdgs" [5c014afc-3ab0-4d20-83b6-adbb9a6133ec] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-proxy-wv8bn" [75f8ca14-0f7c-4e85-884c-b55161236c22] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-scheduler-ha-062500" [bc127693-7c90-4778-bef4-a9aa231e89a8] Running
	I0719 04:40:14.965784    8304 system_pods.go:89] "kube-scheduler-ha-062500-m02" [37551193-9128-4afd-9653-1639d1727249] Running
	I0719 04:40:14.965784    8304 system_pods.go:89] "kube-scheduler-ha-062500-m03" [01ce36f4-8c3e-4bd7-aa4f-230aa4273049] Running
	I0719 04:40:14.965825    8304 system_pods.go:89] "kube-vip-ha-062500" [87843ee5-6fdf-473a-8818-47b1927340d6] Running
	I0719 04:40:14.965825    8304 system_pods.go:89] "kube-vip-ha-062500-m02" [8ce744ae-1492-4359-860f-f7ff13977733] Running
	I0719 04:40:14.965825    8304 system_pods.go:89] "kube-vip-ha-062500-m03" [30925675-f944-440d-a0b5-a8356bd0297b] Running
	I0719 04:40:14.965825    8304 system_pods.go:89] "storage-provisioner" [d029a307-143b-4ef5-8619-f06e267d756c] Running
	I0719 04:40:14.965865    8304 system_pods.go:126] duration metric: took 218.1658ms to wait for k8s-apps to be running ...
	I0719 04:40:14.965865    8304 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:40:14.976417    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:40:15.002477    8304 system_svc.go:56] duration metric: took 36.6113ms WaitForService to wait for kubelet
	I0719 04:40:15.003201    8304 kubeadm.go:582] duration metric: took 24.2636411s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:40:15.003201    8304 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:40:15.131876    8304 request.go:629] Waited for 128.4836ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes
	I0719 04:40:15.131876    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes
	I0719 04:40:15.131876    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:15.131876    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:15.131876    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:15.136693    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:15.137965    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:40:15.137965    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:40:15.137965    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:40:15.137965    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:40:15.137965    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:40:15.137965    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:40:15.137965    8304 node_conditions.go:105] duration metric: took 134.763ms to run NodePressure ...
	I0719 04:40:15.137965    8304 start.go:241] waiting for startup goroutines ...
	I0719 04:40:15.137965    8304 start.go:255] writing updated cluster config ...
	I0719 04:40:15.150405    8304 ssh_runner.go:195] Run: rm -f paused
	I0719 04:40:15.297399    8304 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 04:40:15.300922    8304 out.go:177] * Done! kubectl is now configured to use "ha-062500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 19 04:32:14 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21e0472b810d2894b3251cdc11420cc80a585b2140cacb54c1721668a1a2c4d4/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 04:32:14 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3cbe60a98d0a5f93ec9d91c28416a1e582614cd64562ebdb5222ed2c5b346786/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 04:32:14 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/19818c3d9e967acec6697c474831ffd5e6f5d7e1e8a807b73819c3349b0972c6/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.425926092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.426076893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.426119293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.426755698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.638377978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.638624179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.638800281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.639111783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.661004346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.661248948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.661370249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.663032562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:40:55 ha-062500 dockerd[1439]: time="2024-07-19T04:40:55.011712580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:40:55 ha-062500 dockerd[1439]: time="2024-07-19T04:40:55.011991185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:40:55 ha-062500 dockerd[1439]: time="2024-07-19T04:40:55.012015185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:40:55 ha-062500 dockerd[1439]: time="2024-07-19T04:40:55.012173588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:40:55 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:40:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d133268d9d7a083392c792d8717340f916c2e67fdfd99b4ec0c35d377ec662c5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 04:40:56 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:40:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 19 04:40:56 ha-062500 dockerd[1439]: time="2024-07-19T04:40:56.871802642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:40:56 ha-062500 dockerd[1439]: time="2024-07-19T04:40:56.871911543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:40:56 ha-062500 dockerd[1439]: time="2024-07-19T04:40:56.871948344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:40:56 ha-062500 dockerd[1439]: time="2024-07-19T04:40:56.872735553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	02a0ee65995f3       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   d133268d9d7a0       busybox-fc5497c4f-drzm5
	d25c4a2b3eb6f       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   19818c3d9e967       coredns-7db6d8ff4d-jpmb4
	8f2c7b9cacfa2       cbb01a7bd410d                                                                                         9 minutes ago        Running             coredns                   0                   3cbe60a98d0a5       coredns-7db6d8ff4d-jb6nt
	0ad384904d3a4       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   21e0472b810d2       storage-provisioner
	1ecc3bacfa9d8       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              10 minutes ago       Running             kindnet-cni               0                   22b7077a7b107       kindnet-sk9jr
	a00b203469643       55bb025d2cfa5                                                                                         10 minutes ago       Running             kube-proxy                0                   b2a69508a441d       kube-proxy-wv8bn
	3042d34fba992       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     10 minutes ago       Running             kube-vip                  0                   b3a39c0b82e5c       kube-vip-ha-062500
	3db2de00e2413       76932a3b37d7e                                                                                         10 minutes ago       Running             kube-controller-manager   0                   0ae49e148da79       kube-controller-manager-ha-062500
	6f24d8e2a5f0e       3edc18e7b7672                                                                                         10 minutes ago       Running             kube-scheduler            0                   ed5864d311f88       kube-scheduler-ha-062500
	79a4c71c9c9aa       3861cfcd7c04c                                                                                         10 minutes ago       Running             etcd                      0                   b0494ed88daad       etcd-ha-062500
	0e6e869de2f3d       1f6d574d502f3                                                                                         10 minutes ago       Running             kube-apiserver            0                   14a6f4c293f91       kube-apiserver-ha-062500
	
	
	==> coredns [8f2c7b9cacfa] <==
	[INFO] 10.244.1.2:58487 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000577s
	[INFO] 10.244.0.4:60846 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231303s
	[INFO] 10.244.0.4:38114 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.049382123s
	[INFO] 10.244.0.4:39370 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037775577s
	[INFO] 10.244.0.4:53688 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204503s
	[INFO] 10.244.2.2:50790 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258703s
	[INFO] 10.244.2.2:52577 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000143302s
	[INFO] 10.244.2.2:57827 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182703s
	[INFO] 10.244.1.2:32821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113201s
	[INFO] 10.244.1.2:60333 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087501s
	[INFO] 10.244.1.2:45200 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000060301s
	[INFO] 10.244.1.2:52936 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133902s
	[INFO] 10.244.1.2:34981 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000601s
	[INFO] 10.244.1.2:33642 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162302s
	[INFO] 10.244.0.4:46012 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252303s
	[INFO] 10.244.0.4:48739 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169203s
	[INFO] 10.244.2.2:54941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071501s
	[INFO] 10.244.1.2:58693 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210203s
	[INFO] 10.244.0.4:33639 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124502s
	[INFO] 10.244.0.4:44098 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000236403s
	[INFO] 10.244.0.4:52780 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213903s
	[INFO] 10.244.2.2:40272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000246703s
	[INFO] 10.244.2.2:45577 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000689s
	[INFO] 10.244.1.2:55202 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186003s
	[INFO] 10.244.1.2:45696 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066801s
	
	
	==> coredns [d25c4a2b3eb6] <==
	[INFO] 10.244.1.2:39548 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000074201s
	[INFO] 10.244.0.4:34254 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000307804s
	[INFO] 10.244.0.4:47466 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140002s
	[INFO] 10.244.0.4:37327 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139201s
	[INFO] 10.244.0.4:58603 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173802s
	[INFO] 10.244.2.2:38729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000290604s
	[INFO] 10.244.2.2:56481 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013987377s
	[INFO] 10.244.2.2:58013 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093701s
	[INFO] 10.244.2.2:44021 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059901s
	[INFO] 10.244.2.2:46521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063101s
	[INFO] 10.244.1.2:54966 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121002s
	[INFO] 10.244.1.2:51310 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155402s
	[INFO] 10.244.0.4:50059 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000461406s
	[INFO] 10.244.0.4:46661 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153402s
	[INFO] 10.244.2.2:60745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245303s
	[INFO] 10.244.2.2:34262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101102s
	[INFO] 10.244.2.2:41051 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061701s
	[INFO] 10.244.1.2:54731 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104401s
	[INFO] 10.244.1.2:45398 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065401s
	[INFO] 10.244.1.2:33483 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076401s
	[INFO] 10.244.0.4:58311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210603s
	[INFO] 10.244.2.2:49862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121602s
	[INFO] 10.244.2.2:59396 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112702s
	[INFO] 10.244.1.2:35847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154302s
	[INFO] 10.244.1.2:43744 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000288804s
	
	
	==> describe nodes <==
	Name:               ha-062500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-062500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-062500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_31_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:31:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-062500
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:41:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:41:12 +0000   Fri, 19 Jul 2024 04:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:41:12 +0000   Fri, 19 Jul 2024 04:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:41:12 +0000   Fri, 19 Jul 2024 04:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:41:12 +0000   Fri, 19 Jul 2024 04:32:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.168.223
	  Hostname:    ha-062500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1957180cbd7409b81ce8f16833129c1
	  System UUID:                0a9deb14-3d0a-ab4a-9249-9dea7abfc63c
	  Boot ID:                    aa43ec6b-25e1-4a68-98a1-a8571e0c507b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drzm5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-7db6d8ff4d-jb6nt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 coredns-7db6d8ff4d-jpmb4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-ha-062500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-sk9jr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-ha-062500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-062500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-wv8bn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-062500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-062500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 10m    kube-proxy       
	  Normal  Starting                 10m    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  10m    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m    kubelet          Node ha-062500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m    kubelet          Node ha-062500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m    kubelet          Node ha-062500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m    node-controller  Node ha-062500 event: Registered Node ha-062500 in Controller
	  Normal  NodeReady                9m47s  kubelet          Node ha-062500 status is now: NodeReady
	  Normal  RegisteredNode           6m2s   node-controller  Node ha-062500 event: Registered Node ha-062500 in Controller
	  Normal  RegisteredNode           116s   node-controller  Node ha-062500 event: Registered Node ha-062500 in Controller
	
	
	Name:               ha-062500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-062500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-062500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_35_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:35:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-062500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:41:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:41:15 +0000   Fri, 19 Jul 2024 04:35:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:41:15 +0000   Fri, 19 Jul 2024 04:35:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:41:15 +0000   Fri, 19 Jul 2024 04:35:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:41:15 +0000   Fri, 19 Jul 2024 04:36:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.171.55
	  Hostname:    ha-062500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bd346714ac04d668146c1d818a78db6
	  System UUID:                9ad08af1-7bc2-f947-8ee2-852efca451b0
	  Boot ID:                    6200c2db-11c3-42fc-af25-30a73ef010cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nkb7m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-062500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m18s
	  kube-system                 kindnet-xw86l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-062500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-controller-manager-ha-062500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-proxy-rtdgs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-062500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m18s
	  kube-system                 kube-vip-ha-062500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m17s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m22s (x8 over 6m22s)  kubelet          Node ha-062500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s (x8 over 6m22s)  kubelet          Node ha-062500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m22s (x7 over 6m22s)  kubelet          Node ha-062500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m17s                  node-controller  Node ha-062500-m02 event: Registered Node ha-062500-m02 in Controller
	  Normal  RegisteredNode           6m2s                   node-controller  Node ha-062500-m02 event: Registered Node ha-062500-m02 in Controller
	  Normal  RegisteredNode           116s                   node-controller  Node ha-062500-m02 event: Registered Node ha-062500-m02 in Controller
	
	
	Name:               ha-062500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-062500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-062500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_39_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:39:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-062500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:41:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:41:13 +0000   Fri, 19 Jul 2024 04:39:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:41:13 +0000   Fri, 19 Jul 2024 04:39:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:41:13 +0000   Fri, 19 Jul 2024 04:39:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:41:13 +0000   Fri, 19 Jul 2024 04:40:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.161.140
	  Hostname:    ha-062500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbe6ed026070475789efdbc989b15461
	  System UUID:                8b9415f4-8533-c247-b849-f016d659a93f
	  Boot ID:                    cdda7d84-08be-4b13-b094-0338e83dcd8c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-njwwk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-062500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m18s
	  kube-system                 kindnet-g9b42                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m18s
	  kube-system                 kube-apiserver-ha-062500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-ha-062500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-proxy-g7z8c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-ha-062500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-vip-ha-062500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m13s                  kube-proxy       
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m19s (x2 over 2m19s)  kubelet          Node ha-062500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s (x2 over 2m19s)  kubelet          Node ha-062500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s (x2 over 2m19s)  kubelet          Node ha-062500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m17s                  node-controller  Node ha-062500-m03 event: Registered Node ha-062500-m03 in Controller
	  Normal  RegisteredNode           2m17s                  node-controller  Node ha-062500-m03 event: Registered Node ha-062500-m03 in Controller
	  Normal  RegisteredNode           116s                   node-controller  Node ha-062500-m03 event: Registered Node ha-062500-m03 in Controller
	  Normal  NodeReady                111s                   kubelet          Node ha-062500-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.853333] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul19 04:30] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.174564] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Jul19 04:31] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.144036] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.566311] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.200864] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +0.239060] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +2.870819] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.216598] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.209926] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.273726] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[ +11.145760] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +0.113590] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.861233] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +7.010433] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.107333] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.623229] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.908073] systemd-fstab-generator[2368]: Ignoring "noauto" option for root device
	[ +13.977150] kauditd_printk_skb: 12 callbacks suppressed
	[Jul19 04:32] kauditd_printk_skb: 29 callbacks suppressed
	[Jul19 04:35] kauditd_printk_skb: 26 callbacks suppressed
	[Jul19 04:39] hrtimer: interrupt took 1159110 ns
	
	
	==> etcd [79a4c71c9c9a] <==
	{"level":"info","ts":"2024-07-19T04:39:45.895875Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"348a69c43044ac55"}
	{"level":"info","ts":"2024-07-19T04:39:45.90188Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"1d3a7267867daff4","remote-peer-id":"348a69c43044ac55"}
	{"level":"info","ts":"2024-07-19T04:39:45.914285Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"1d3a7267867daff4","remote-peer-id":"348a69c43044ac55"}
	{"level":"info","ts":"2024-07-19T04:39:45.937223Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1d3a7267867daff4","to":"348a69c43044ac55","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-19T04:39:45.93738Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"1d3a7267867daff4","remote-peer-id":"348a69c43044ac55"}
	{"level":"info","ts":"2024-07-19T04:39:46.012674Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"1d3a7267867daff4","to":"348a69c43044ac55","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-19T04:39:46.013155Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"1d3a7267867daff4","remote-peer-id":"348a69c43044ac55"}
	{"level":"warn","ts":"2024-07-19T04:39:46.548744Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"348a69c43044ac55","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-19T04:39:47.548702Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"348a69c43044ac55","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-19T04:39:47.765731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.248741ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-g7z8c\" ","response":"range_response_count:1 size:4691"}
	{"level":"info","ts":"2024-07-19T04:39:47.76587Z","caller":"traceutil/trace.go:171","msg":"trace[732603916] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-g7z8c; range_end:; response_count:1; response_revision:1529; }","duration":"105.571543ms","start":"2024-07-19T04:39:47.660283Z","end":"2024-07-19T04:39:47.765855Z","steps":["trace[732603916] 'agreement among raft nodes before linearized reading'  (duration: 88.537891ms)","trace[732603916] 'range keys from in-memory index tree'  (duration: 16.76865ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T04:39:48.06818Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"400f09ee3b5e5130","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"19.674863ms"}
	{"level":"warn","ts":"2024-07-19T04:39:48.068411Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"348a69c43044ac55","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"19.906565ms"}
	{"level":"info","ts":"2024-07-19T04:39:48.069449Z","caller":"traceutil/trace.go:171","msg":"trace[1989668754] transaction","detail":"{read_only:false; response_revision:1532; number_of_response:1; }","duration":"201.686802ms","start":"2024-07-19T04:39:47.867745Z","end":"2024-07-19T04:39:48.069432Z","steps":["trace[1989668754] 'process raft request'  (duration: 201.4573ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:39:49.069217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"1d3a7267867daff4 switched to configuration voters=(2106121564712710132 3785954728102636629 4615919061880951088)"}
	{"level":"info","ts":"2024-07-19T04:39:49.069371Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"a4ea440a780a51c9","local-member-id":"1d3a7267867daff4"}
	{"level":"info","ts":"2024-07-19T04:39:49.069459Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"1d3a7267867daff4","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"348a69c43044ac55"}
	{"level":"info","ts":"2024-07-19T04:39:55.653554Z","caller":"traceutil/trace.go:171","msg":"trace[484949] transaction","detail":"{read_only:false; response_revision:1560; number_of_response:1; }","duration":"104.978338ms","start":"2024-07-19T04:39:55.548555Z","end":"2024-07-19T04:39:55.653533Z","steps":["trace[484949] 'process raft request'  (duration: 104.783536ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:39:56.810915Z","caller":"traceutil/trace.go:171","msg":"trace[688939202] transaction","detail":"{read_only:false; response_revision:1563; number_of_response:1; }","duration":"138.467737ms","start":"2024-07-19T04:39:56.67243Z","end":"2024-07-19T04:39:56.810897Z","steps":["trace[688939202] 'process raft request'  (duration: 138.369237ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:39:57.324126Z","caller":"traceutil/trace.go:171","msg":"trace[721544762] transaction","detail":"{read_only:false; response_revision:1564; number_of_response:1; }","duration":"126.283328ms","start":"2024-07-19T04:39:57.197825Z","end":"2024-07-19T04:39:57.324108Z","steps":["trace[721544762] 'process raft request'  (duration: 126.113027ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:40:54.292117Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.983606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-fc5497c4f-njwwk\" ","response":"range_response_count:1 size:2184"}
	{"level":"info","ts":"2024-07-19T04:40:54.292285Z","caller":"traceutil/trace.go:171","msg":"trace[255784959] range","detail":"{range_begin:/registry/pods/default/busybox-fc5497c4f-njwwk; range_end:; response_count:1; response_revision:1737; }","duration":"106.17431ms","start":"2024-07-19T04:40:54.186097Z","end":"2024-07-19T04:40:54.292271Z","steps":["trace[255784959] 'agreement among raft nodes before linearized reading'  (duration: 85.638859ms)","trace[255784959] 'range keys from in-memory index tree'  (duration: 20.251345ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:41:33.332352Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1033}
	{"level":"info","ts":"2024-07-19T04:41:33.478027Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1033,"took":"140.657953ms","hash":1575251545,"current-db-size-bytes":3526656,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2072576,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-07-19T04:41:33.478228Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1575251545,"revision":1033,"compact-revision":-1}
	
	
	==> kernel <==
	 04:42:01 up 12 min,  0 users,  load average: 0.43, 0.39, 0.25
	Linux ha-062500 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1ecc3bacfa9d] <==
	I0719 04:41:12.679415       1 main.go:326] Node ha-062500-m02 has CIDR [10.244.1.0/24] 
	I0719 04:41:22.675601       1 main.go:299] Handling node with IPs: map[172.28.168.223:{}]
	I0719 04:41:22.675701       1 main.go:303] handling current node
	I0719 04:41:22.675721       1 main.go:299] Handling node with IPs: map[172.28.171.55:{}]
	I0719 04:41:22.675729       1 main.go:326] Node ha-062500-m02 has CIDR [10.244.1.0/24] 
	I0719 04:41:22.676137       1 main.go:299] Handling node with IPs: map[172.28.161.140:{}]
	I0719 04:41:22.676224       1 main.go:326] Node ha-062500-m03 has CIDR [10.244.2.0/24] 
	I0719 04:41:32.678632       1 main.go:299] Handling node with IPs: map[172.28.168.223:{}]
	I0719 04:41:32.678721       1 main.go:303] handling current node
	I0719 04:41:32.678741       1 main.go:299] Handling node with IPs: map[172.28.171.55:{}]
	I0719 04:41:32.678750       1 main.go:326] Node ha-062500-m02 has CIDR [10.244.1.0/24] 
	I0719 04:41:32.678877       1 main.go:299] Handling node with IPs: map[172.28.161.140:{}]
	I0719 04:41:32.678890       1 main.go:326] Node ha-062500-m03 has CIDR [10.244.2.0/24] 
	I0719 04:41:42.681643       1 main.go:299] Handling node with IPs: map[172.28.168.223:{}]
	I0719 04:41:42.681750       1 main.go:303] handling current node
	I0719 04:41:42.681773       1 main.go:299] Handling node with IPs: map[172.28.171.55:{}]
	I0719 04:41:42.681892       1 main.go:326] Node ha-062500-m02 has CIDR [10.244.1.0/24] 
	I0719 04:41:42.682172       1 main.go:299] Handling node with IPs: map[172.28.161.140:{}]
	I0719 04:41:42.682359       1 main.go:326] Node ha-062500-m03 has CIDR [10.244.2.0/24] 
	I0719 04:41:52.674745       1 main.go:299] Handling node with IPs: map[172.28.168.223:{}]
	I0719 04:41:52.674909       1 main.go:303] handling current node
	I0719 04:41:52.674980       1 main.go:299] Handling node with IPs: map[172.28.171.55:{}]
	I0719 04:41:52.675110       1 main.go:326] Node ha-062500-m02 has CIDR [10.244.1.0/24] 
	I0719 04:41:52.675694       1 main.go:299] Handling node with IPs: map[172.28.161.140:{}]
	I0719 04:41:52.675784       1 main.go:326] Node ha-062500-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [0e6e869de2f3] <==
	I0719 04:31:39.315328       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 04:31:40.848019       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 04:31:40.895909       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 04:31:40.916525       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 04:31:53.551949       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 04:31:53.751821       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0719 04:39:42.777545       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0719 04:39:42.777755       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0719 04:39:42.777909       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.3µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0719 04:39:42.779279       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0719 04:39:42.779525       1 timeout.go:142] post-timeout activity - time-elapsed: 2.133619ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
	E0719 04:41:00.301699       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59767: use of closed network connection
	E0719 04:41:01.944408       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59769: use of closed network connection
	E0719 04:41:02.486097       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59771: use of closed network connection
	E0719 04:41:03.105744       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59773: use of closed network connection
	E0719 04:41:03.687783       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59775: use of closed network connection
	E0719 04:41:04.216101       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59777: use of closed network connection
	E0719 04:41:04.746218       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59779: use of closed network connection
	E0719 04:41:05.289713       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59781: use of closed network connection
	E0719 04:41:05.837073       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59783: use of closed network connection
	E0719 04:41:06.784764       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59786: use of closed network connection
	E0719 04:41:17.298049       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59788: use of closed network connection
	E0719 04:41:17.836088       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59791: use of closed network connection
	E0719 04:41:28.844026       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59797: use of closed network connection
	E0719 04:41:39.380033       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59799: use of closed network connection
	
	
	==> kube-controller-manager [3db2de00e241] <==
	I0719 04:32:17.967732       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 04:35:38.412700       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-062500-m02\" does not exist"
	I0719 04:35:38.461148       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-062500-m02" podCIDRs=["10.244.1.0/24"]
	I0719 04:35:43.009700       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-062500-m02"
	I0719 04:39:41.936731       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-062500-m03\" does not exist"
	I0719 04:39:41.994693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-062500-m03" podCIDRs=["10.244.2.0/24"]
	I0719 04:39:43.112190       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-062500-m03"
	I0719 04:40:54.217837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="177.12062ms"
	I0719 04:40:54.300933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.980514ms"
	I0719 04:40:54.301042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.301µs"
	I0719 04:40:54.341695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.701µs"
	I0719 04:40:54.344256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.6µs"
	I0719 04:40:54.344901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.501µs"
	I0719 04:40:54.635603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="257.597191ms"
	I0719 04:40:54.909957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="274.205974ms"
	I0719 04:40:54.986548       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.132998ms"
	I0719 04:40:54.986879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="160.203µs"
	I0719 04:40:55.139086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.503523ms"
	I0719 04:40:55.139641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.601µs"
	I0719 04:40:57.386259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.050841ms"
	I0719 04:40:57.387217       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="198.502µs"
	I0719 04:40:57.603333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.839989ms"
	I0719 04:40:57.603480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.601µs"
	I0719 04:40:57.744462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.834104ms"
	I0719 04:40:57.744554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.1µs"
	
	
	==> kube-proxy [a00b20346964] <==
	I0719 04:31:55.572666       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:31:55.585939       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.168.223"]
	I0719 04:31:55.642049       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:31:55.642178       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:31:55.642199       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:31:55.647230       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:31:55.647842       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:31:55.648371       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:31:55.649774       1 config.go:192] "Starting service config controller"
	I0719 04:31:55.649854       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:31:55.649964       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:31:55.650052       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:31:55.650805       1 config.go:319] "Starting node config controller"
	I0719 04:31:55.650897       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:31:55.750471       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:31:55.750691       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:31:55.751483       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f24d8e2a5f0] <==
	E0719 04:31:37.510406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:37.596094       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 04:31:37.596452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:31:37.754017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 04:31:37.755177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 04:31:37.782066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:31:37.782250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 04:31:37.798014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:31:37.798060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:31:37.863607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:31:37.863904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:37.936229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 04:31:37.936464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:37.961408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:31:37.961834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0719 04:31:39.872832       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 04:40:54.167747       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-njwwk\": pod busybox-fc5497c4f-njwwk is already assigned to node \"ha-062500-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-njwwk" node="ha-062500-m02"
	E0719 04:40:54.167868       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-njwwk\": pod busybox-fc5497c4f-njwwk is already assigned to node \"ha-062500-m03\"" pod="default/busybox-fc5497c4f-njwwk"
	E0719 04:40:54.193078       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nkb7m\": pod busybox-fc5497c4f-nkb7m is already assigned to node \"ha-062500-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-nkb7m" node="ha-062500-m03"
	E0719 04:40:54.193567       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nkb7m\": pod busybox-fc5497c4f-nkb7m is already assigned to node \"ha-062500-m02\"" pod="default/busybox-fc5497c4f-nkb7m"
	I0719 04:40:54.201222       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="34690d25-0055-4780-9509-46acd99240e2" pod="default/busybox-fc5497c4f-drzm5" assumedNode="ha-062500" currentNode="ha-062500-m02"
	E0719 04:40:54.241366       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-drzm5\": pod busybox-fc5497c4f-drzm5 is already assigned to node \"ha-062500\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-drzm5" node="ha-062500-m02"
	E0719 04:40:54.241442       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 34690d25-0055-4780-9509-46acd99240e2(default/busybox-fc5497c4f-drzm5) was assumed on ha-062500-m02 but assigned to ha-062500" pod="default/busybox-fc5497c4f-drzm5"
	E0719 04:40:54.241466       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-drzm5\": pod busybox-fc5497c4f-drzm5 is already assigned to node \"ha-062500\"" pod="default/busybox-fc5497c4f-drzm5"
	I0719 04:40:54.241487       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-drzm5" node="ha-062500"
	
	
	==> kubelet <==
	Jul 19 04:38:40 ha-062500 kubelet[2375]: E0719 04:38:40.952103    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:38:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:38:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:38:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:38:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:39:40 ha-062500 kubelet[2375]: E0719 04:39:40.945510    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:39:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:39:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:39:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:39:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:40:40 ha-062500 kubelet[2375]: E0719 04:40:40.952438    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:40:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:40:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:40:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:40:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:40:54 ha-062500 kubelet[2375]: I0719 04:40:54.210714    2375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jpmb4" podStartSLOduration=541.210687126 podStartE2EDuration="9m1.210687126s" podCreationTimestamp="2024-07-19 04:31:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-19 04:32:15.710146785 +0000 UTC m=+35.019063049" watchObservedRunningTime="2024-07-19 04:40:54.210687126 +0000 UTC m=+553.519603390"
	Jul 19 04:40:54 ha-062500 kubelet[2375]: I0719 04:40:54.211862    2375 topology_manager.go:215] "Topology Admit Handler" podUID="34690d25-0055-4780-9509-46acd99240e2" podNamespace="default" podName="busybox-fc5497c4f-drzm5"
	Jul 19 04:40:54 ha-062500 kubelet[2375]: I0719 04:40:54.252754    2375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlfj2\" (UniqueName: \"kubernetes.io/projected/34690d25-0055-4780-9509-46acd99240e2-kube-api-access-nlfj2\") pod \"busybox-fc5497c4f-drzm5\" (UID: \"34690d25-0055-4780-9509-46acd99240e2\") " pod="default/busybox-fc5497c4f-drzm5"
	Jul 19 04:40:55 ha-062500 kubelet[2375]: I0719 04:40:55.289659    2375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d133268d9d7a083392c792d8717340f916c2e67fdfd99b4ec0c35d377ec662c5"
	Jul 19 04:41:06 ha-062500 kubelet[2375]: E0719 04:41:06.786331    2375 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38858->127.0.0.1:42261: write tcp 127.0.0.1:38858->127.0.0.1:42261: write: broken pipe
	Jul 19 04:41:40 ha-062500 kubelet[2375]: E0719 04:41:40.953816    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:41:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:41:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:41:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:41:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:41:52.351453    6588 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-062500 -n ha-062500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-062500 -n ha-062500: (13.0363295s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-062500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (70.43s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (48.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-062500 node stop m02 -v=7 --alsologtostderr: exit status 1 (12.3075774s)

                                                
                                                
-- stdout --
	* Stopping node "ha-062500-m02"  ...
	* Powering off "ha-062500-m02" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:58:16.967876    3716 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 04:58:17.046551    3716 out.go:291] Setting OutFile to fd 700 ...
	I0719 04:58:17.064877    3716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:58:17.064877    3716 out.go:304] Setting ErrFile to fd 716...
	I0719 04:58:17.064877    3716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:58:17.080074    3716 mustload.go:65] Loading cluster: ha-062500
	I0719 04:58:17.080912    3716 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:58:17.080912    3716 stop.go:39] StopHost: ha-062500-m02
	I0719 04:58:17.085727    3716 out.go:177] * Stopping node "ha-062500-m02"  ...
	I0719 04:58:17.088735    3716 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0719 04:58:17.099542    3716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0719 04:58:17.099542    3716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:58:19.390237    3716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:58:19.390237    3716 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:58:19.390237    3716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:58:22.037567    3716 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:58:22.037567    3716 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:58:22.038445    3716 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:58:22.167286    3716 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.0676858s)
	I0719 04:58:22.179451    3716 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0719 04:58:22.258343    3716 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0719 04:58:22.327491    3716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:58:24.546056    3716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:58:24.546056    3716 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:58:24.550524    3716 out.go:177] * Powering off "ha-062500-m02" via SSH ...
	I0719 04:58:24.553241    3716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:58:26.818295    3716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:58:26.818603    3716 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:58:26.818603    3716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-windows-amd64.exe -p ha-062500 node stop m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-062500 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:372: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-062500 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-062500 -n ha-062500
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-062500 -n ha-062500: (12.7348587s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 logs -n 25: (9.1279441s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:53 UTC |
	|         | ha-062500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:53 UTC | 19 Jul 24 04:54 UTC |
	|         | ha-062500:/home/docker/cp-test_ha-062500-m03_ha-062500.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:54 UTC | 19 Jul 24 04:54 UTC |
	|         | ha-062500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n ha-062500 sudo cat                                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:54 UTC | 19 Jul 24 04:54 UTC |
	|         | /home/docker/cp-test_ha-062500-m03_ha-062500.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:54 UTC | 19 Jul 24 04:54 UTC |
	|         | ha-062500-m02:/home/docker/cp-test_ha-062500-m03_ha-062500-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:54 UTC | 19 Jul 24 04:54 UTC |
	|         | ha-062500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n ha-062500-m02 sudo cat                                                                                   | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:54 UTC | 19 Jul 24 04:55 UTC |
	|         | /home/docker/cp-test_ha-062500-m03_ha-062500-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:55 UTC | 19 Jul 24 04:55 UTC |
	|         | ha-062500-m04:/home/docker/cp-test_ha-062500-m03_ha-062500-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:55 UTC | 19 Jul 24 04:55 UTC |
	|         | ha-062500-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n ha-062500-m04 sudo cat                                                                                   | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:55 UTC | 19 Jul 24 04:55 UTC |
	|         | /home/docker/cp-test_ha-062500-m03_ha-062500-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-062500 cp testdata\cp-test.txt                                                                                         | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:55 UTC | 19 Jul 24 04:55 UTC |
	|         | ha-062500-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:55 UTC | 19 Jul 24 04:56 UTC |
	|         | ha-062500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:56 UTC | 19 Jul 24 04:56 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:56 UTC | 19 Jul 24 04:56 UTC |
	|         | ha-062500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:56 UTC | 19 Jul 24 04:56 UTC |
	|         | ha-062500:/home/docker/cp-test_ha-062500-m04_ha-062500.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:56 UTC | 19 Jul 24 04:56 UTC |
	|         | ha-062500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n ha-062500 sudo cat                                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:56 UTC | 19 Jul 24 04:57 UTC |
	|         | /home/docker/cp-test_ha-062500-m04_ha-062500.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:57 UTC | 19 Jul 24 04:57 UTC |
	|         | ha-062500-m02:/home/docker/cp-test_ha-062500-m04_ha-062500-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:57 UTC | 19 Jul 24 04:57 UTC |
	|         | ha-062500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n ha-062500-m02 sudo cat                                                                                   | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:57 UTC | 19 Jul 24 04:57 UTC |
	|         | /home/docker/cp-test_ha-062500-m04_ha-062500-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt                                                                       | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:57 UTC | 19 Jul 24 04:57 UTC |
	|         | ha-062500-m03:/home/docker/cp-test_ha-062500-m04_ha-062500-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n                                                                                                          | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:57 UTC | 19 Jul 24 04:58 UTC |
	|         | ha-062500-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-062500 ssh -n ha-062500-m03 sudo cat                                                                                   | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:58 UTC | 19 Jul 24 04:58 UTC |
	|         | /home/docker/cp-test_ha-062500-m04_ha-062500-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-062500 node stop m02 -v=7                                                                                              | ha-062500 | minikube6\jenkins | v1.33.1 | 19 Jul 24 04:58 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 04:28:29
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 04:28:29.297538    8304 out.go:291] Setting OutFile to fd 732 ...
	I0719 04:28:29.298520    8304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:28:29.298520    8304 out.go:304] Setting ErrFile to fd 896...
	I0719 04:28:29.298520    8304 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 04:28:29.320671    8304 out.go:298] Setting JSON to false
	I0719 04:28:29.323662    8304 start.go:129] hostinfo: {"hostname":"minikube6","uptime":22335,"bootTime":1721340973,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 04:28:29.323662    8304 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 04:28:29.332562    8304 out.go:177] * [ha-062500] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 04:28:29.337089    8304 notify.go:220] Checking for updates...
	I0719 04:28:29.338037    8304 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:28:29.340092    8304 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 04:28:29.344031    8304 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 04:28:29.346479    8304 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 04:28:29.348900    8304 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 04:28:29.352525    8304 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 04:28:34.813257    8304 out.go:177] * Using the hyperv driver based on user configuration
	I0719 04:28:34.818688    8304 start.go:297] selected driver: hyperv
	I0719 04:28:34.818721    8304 start.go:901] validating driver "hyperv" against <nil>
	I0719 04:28:34.818815    8304 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 04:28:34.865459    8304 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 04:28:34.867776    8304 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:28:34.867819    8304 cni.go:84] Creating CNI manager for ""
	I0719 04:28:34.867819    8304 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 04:28:34.867819    8304 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 04:28:34.867819    8304 start.go:340] cluster config:
	{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:28:34.868551    8304 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 04:28:34.873638    8304 out.go:177] * Starting "ha-062500" primary control-plane node in "ha-062500" cluster
	I0719 04:28:34.876335    8304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:28:34.876335    8304 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 04:28:34.876335    8304 cache.go:56] Caching tarball of preloaded images
	I0719 04:28:34.876335    8304 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:28:34.877211    8304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:28:34.877771    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:28:34.878040    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json: {Name:mk584e85affd6cb4e038183a910b65d81c19636d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:28:34.878814    8304 start.go:360] acquireMachinesLock for ha-062500: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:28:34.878814    8304 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-062500"
	I0719 04:28:34.879818    8304 start.go:93] Provisioning new machine with config: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:28:34.879818    8304 start.go:125] createHost starting for "" (driver="hyperv")
	I0719 04:28:34.881474    8304 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:28:34.882769    8304 start.go:159] libmachine.API.Create for "ha-062500" (driver="hyperv")
	I0719 04:28:34.882769    8304 client.go:168] LocalClient.Create starting
	I0719 04:28:34.883046    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 04:28:34.883046    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:28:34.883046    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:28:34.883046    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 04:28:34.884172    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:28:34.884172    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:28:34.884392    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 04:28:36.977754    8304 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 04:28:36.977842    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:36.977933    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 04:28:38.714552    8304 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 04:28:38.714552    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:38.715133    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:28:40.201910    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:28:40.201910    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:40.201910    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:28:43.823071    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:28:43.823301    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:43.825803    8304 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:28:44.300316    8304 main.go:141] libmachine: Creating SSH key...
	I0719 04:28:44.495605    8304 main.go:141] libmachine: Creating VM...
	I0719 04:28:44.495605    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:28:47.306341    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:28:47.306532    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:47.306532    8304 main.go:141] libmachine: Using switch "Default Switch"
	I0719 04:28:47.306532    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:28:49.080663    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:28:49.080663    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:49.080663    8304 main.go:141] libmachine: Creating VHD
	I0719 04:28:49.080663    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 04:28:52.898187    8304 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4A0B9D16-61CF-42E2-A324-34273632452A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 04:28:52.898187    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:52.898187    8304 main.go:141] libmachine: Writing magic tar header
	I0719 04:28:52.898187    8304 main.go:141] libmachine: Writing SSH key tar header
	I0719 04:28:52.907540    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 04:28:56.142664    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:28:56.142851    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:56.143009    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\disk.vhd' -SizeBytes 20000MB
	I0719 04:28:58.749878    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:28:58.749878    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:28:58.750611    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-062500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0719 04:29:03.005022    8304 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-062500 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 04:29:03.005495    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:03.005495    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-062500 -DynamicMemoryEnabled $false
	I0719 04:29:05.298546    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:05.298546    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:05.298759    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-062500 -Count 2
	I0719 04:29:07.506986    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:07.508017    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:07.508017    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-062500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\boot2docker.iso'
	I0719 04:29:10.131526    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:10.131526    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:10.132092    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-062500 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\disk.vhd'
	I0719 04:29:12.818947    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:12.818947    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:12.819719    8304 main.go:141] libmachine: Starting VM...
	I0719 04:29:12.819719    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-062500
	I0719 04:29:15.994865    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:15.994865    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:15.994865    8304 main.go:141] libmachine: Waiting for host to start...
	I0719 04:29:15.994865    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:18.358194    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:18.358194    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:18.358194    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:20.954588    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:20.955410    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:21.965774    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:24.220460    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:24.220460    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:24.221499    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:26.756336    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:26.756336    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:27.761067    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:30.008757    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:30.008757    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:30.008757    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:32.597755    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:32.597755    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:33.602921    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:35.903497    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:35.903992    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:35.904114    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:38.494191    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:29:38.494271    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:39.509806    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:41.759742    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:41.760025    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:41.760156    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:44.323925    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:29:44.323925    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:44.324485    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:46.471343    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:46.472359    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:46.472442    8304 machine.go:94] provisionDockerMachine start ...
	I0719 04:29:46.472598    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:48.697995    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:48.697995    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:48.697995    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:51.272276    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:29:51.272712    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:51.278246    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:29:51.290034    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:29:51.290304    8304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:29:51.420616    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 04:29:51.420616    8304 buildroot.go:166] provisioning hostname "ha-062500"
	I0719 04:29:51.420616    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:53.563457    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:53.563457    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:53.564563    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:29:56.108767    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:29:56.108767    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:56.113915    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:29:56.114198    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:29:56.114198    8304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-062500 && echo "ha-062500" | sudo tee /etc/hostname
	I0719 04:29:56.267770    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-062500
	
	I0719 04:29:56.267770    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:29:58.461961    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:29:58.462886    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:29:58.462985    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:00.997514    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:00.998432    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:01.003486    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:01.004274    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:01.004274    8304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-062500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-062500/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-062500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:30:01.139934    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:30:01.139934    8304 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:30:01.139934    8304 buildroot.go:174] setting up certificates
	I0719 04:30:01.139934    8304 provision.go:84] configureAuth start
	I0719 04:30:01.139934    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:03.372137    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:03.372411    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:03.372411    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:05.959595    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:05.960469    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:05.960469    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:08.193797    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:08.193797    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:08.194157    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:10.740701    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:10.740930    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:10.741015    8304 provision.go:143] copyHostCerts
	I0719 04:30:10.741157    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 04:30:10.741157    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:30:10.741157    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:30:10.741961    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:30:10.743403    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 04:30:10.743747    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:30:10.743852    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:30:10.744275    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:30:10.745568    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 04:30:10.745797    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:30:10.746033    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:30:10.746180    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:30:10.747438    8304 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-062500 san=[127.0.0.1 172.28.168.223 ha-062500 localhost minikube]
	I0719 04:30:11.020383    8304 provision.go:177] copyRemoteCerts
	I0719 04:30:11.031333    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:30:11.031333    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:13.313148    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:13.313418    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:13.313418    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:15.840742    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:15.840742    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:15.841965    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:30:15.938841    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9074512s)
	I0719 04:30:15.939058    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 04:30:15.939090    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:30:15.984654    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 04:30:15.985129    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0719 04:30:16.029208    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 04:30:16.029764    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:30:16.077116    8304 provision.go:87] duration metric: took 14.9370097s to configureAuth
	I0719 04:30:16.077266    8304 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:30:16.077586    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:30:16.077586    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:18.238006    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:18.238006    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:18.238130    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:20.903519    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:20.903519    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:20.910129    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:20.910357    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:20.910913    8304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:30:21.029131    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:30:21.029131    8304 buildroot.go:70] root file system type: tmpfs
	I0719 04:30:21.029244    8304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:30:21.029244    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:23.195122    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:23.195497    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:23.195545    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:25.764832    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:25.764832    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:25.774658    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:25.774658    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:25.774658    8304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:30:25.925383    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 04:30:25.925383    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:28.120342    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:28.120342    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:28.120600    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:30.741255    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:30.741362    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:30.746354    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:30.747085    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:30.747085    8304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:30:32.998547    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 04:30:32.998547    8304 machine.go:97] duration metric: took 46.52557s to provisionDockerMachine
	I0719 04:30:32.998547    8304 client.go:171] duration metric: took 1m58.1144198s to LocalClient.Create
	I0719 04:30:32.998547    8304 start.go:167] duration metric: took 1m58.1144198s to libmachine.API.Create "ha-062500"
	I0719 04:30:32.998547    8304 start.go:293] postStartSetup for "ha-062500" (driver="hyperv")
	I0719 04:30:32.998547    8304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:30:33.010285    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:30:33.010803    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:35.269165    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:35.269165    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:35.269165    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:37.927523    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:37.927523    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:37.928712    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:30:38.031913    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0208885s)
	I0719 04:30:38.046729    8304 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:30:38.054279    8304 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:30:38.054426    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:30:38.054846    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:30:38.055840    8304 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:30:38.055913    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 04:30:38.068301    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:30:38.089278    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:30:38.134296    8304 start.go:296] duration metric: took 5.1356894s for postStartSetup
	I0719 04:30:38.138222    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:40.288034    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:40.288034    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:40.288034    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:42.846175    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:42.846384    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:42.846568    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:30:42.850450    8304 start.go:128] duration metric: took 2m7.9691604s to createHost
	I0719 04:30:42.850547    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:44.989708    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:44.989708    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:44.990732    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:47.556391    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:47.556616    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:47.561657    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:47.562461    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:47.562461    8304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 04:30:47.681705    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363447.695597945
	
	I0719 04:30:47.681781    8304 fix.go:216] guest clock: 1721363447.695597945
	I0719 04:30:47.681781    8304 fix.go:229] Guest: 2024-07-19 04:30:47.695597945 +0000 UTC Remote: 2024-07-19 04:30:42.8505478 +0000 UTC m=+133.711257901 (delta=4.845050145s)
	I0719 04:30:47.681857    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:49.821604    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:49.821718    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:49.821859    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:52.363260    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:52.363260    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:52.370009    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:30:52.370490    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.168.223 22 <nil> <nil>}
	I0719 04:30:52.370490    8304 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721363447
	I0719 04:30:52.520344    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:30:47 UTC 2024
	
	I0719 04:30:52.520344    8304 fix.go:236] clock set: Fri Jul 19 04:30:47 UTC 2024
	 (err=<nil>)
	I0719 04:30:52.520344    8304 start.go:83] releasing machines lock for "ha-062500", held for 2m17.6399464s
	I0719 04:30:52.520344    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:54.698281    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:54.698281    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:54.699018    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:57.305432    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:30:57.305490    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:57.309272    8304 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:30:57.309338    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:57.319983    8304 ssh_runner.go:195] Run: cat /version.json
	I0719 04:30:57.319983    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:30:59.580077    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:31:02.286703    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:31:02.286703    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:02.286703    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:31:02.311907    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:31:02.311954    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:02.311954    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:31:02.374052    8304 ssh_runner.go:235] Completed: cat /version.json: (5.0538281s)
	I0719 04:31:02.386353    8304 ssh_runner.go:195] Run: systemctl --version
	I0719 04:31:02.391432    8304 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0820357s)
	W0719 04:31:02.391552    8304 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:31:02.421338    8304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 04:31:02.432230    8304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:31:02.445310    8304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:31:02.474831    8304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:31:02.474831    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:31:02.475116    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:31:02.522321    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0719 04:31:02.533432    8304 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:31:02.533432    8304 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:31:02.564483    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 04:31:02.586487    8304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:31:02.598483    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:31:02.629483    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:31:02.660484    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:31:02.691686    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:31:02.726654    8304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:31:02.762769    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:31:02.795795    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:31:02.827597    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 04:31:02.858771    8304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:31:02.899899    8304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:31:02.928379    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:03.160012    8304 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 04:31:03.194100    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:31:03.206635    8304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:31:03.242425    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:31:03.271415    8304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:31:03.324023    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:31:03.371584    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:31:03.417423    8304 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 04:31:03.478913    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:31:03.502429    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:31:03.550690    8304 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:31:03.568953    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:31:03.586981    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:31:03.630318    8304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:31:03.824136    8304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:31:04.019515    8304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:31:04.019823    8304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:31:04.066066    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:04.264017    8304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:31:06.859168    8304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5951209s)
	I0719 04:31:06.870601    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 04:31:06.908406    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:31:06.947108    8304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 04:31:07.157629    8304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 04:31:07.365154    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:07.566378    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 04:31:07.610799    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:31:07.645289    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:07.845587    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 04:31:07.955569    8304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 04:31:07.968546    8304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 04:31:07.977901    8304 start.go:563] Will wait 60s for crictl version
	I0719 04:31:07.989861    8304 ssh_runner.go:195] Run: which crictl
	I0719 04:31:08.005763    8304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:31:08.068961    8304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 04:31:08.079305    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:31:08.123698    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:31:08.158974    8304 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 04:31:08.159169    8304 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 04:31:08.162781    8304 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 04:31:08.162781    8304 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 04:31:08.162781    8304 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 04:31:08.162781    8304 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 04:31:08.165817    8304 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 04:31:08.165817    8304 ip.go:210] interface addr: 172.28.160.1/20
	I0719 04:31:08.176818    8304 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 04:31:08.183687    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:31:08.218421    8304 kubeadm.go:883] updating cluster {Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 04:31:08.218421    8304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:31:08.226923    8304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 04:31:08.251264    8304 docker.go:685] Got preloaded images: 
	I0719 04:31:08.251264    8304 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0719 04:31:08.262367    8304 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 04:31:08.293963    8304 ssh_runner.go:195] Run: which lz4
	I0719 04:31:08.300137    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 04:31:08.321156    8304 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 04:31:08.328114    8304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 04:31:08.329163    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0719 04:31:10.104144    8304 docker.go:649] duration metric: took 1.7934342s to copy over tarball
	I0719 04:31:10.120630    8304 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 04:31:18.664422    8304 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5436942s)
	I0719 04:31:18.664545    8304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 04:31:18.730961    8304 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 04:31:18.748676    8304 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0719 04:31:18.793529    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:19.002759    8304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:31:22.386999    8304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3840392s)
	I0719 04:31:22.397417    8304 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 04:31:22.432244    8304 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 04:31:22.432244    8304 cache_images.go:84] Images are preloaded, skipping loading
	I0719 04:31:22.432244    8304 kubeadm.go:934] updating node { 172.28.168.223 8443 v1.30.3 docker true true} ...
	I0719 04:31:22.432244    8304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-062500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.168.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:31:22.443547    8304 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 04:31:22.480066    8304 cni.go:84] Creating CNI manager for ""
	I0719 04:31:22.480066    8304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 04:31:22.480066    8304 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 04:31:22.480190    8304 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.168.223 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-062500 NodeName:ha-062500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.168.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.168.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 04:31:22.480387    8304 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.168.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-062500"
	  kubeletExtraArgs:
	    node-ip: 172.28.168.223
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.168.223"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 04:31:22.480498    8304 kube-vip.go:115] generating kube-vip config ...
	I0719 04:31:22.491383    8304 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:31:22.526409    8304 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:31:22.527283    8304 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:31:22.539261    8304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:31:22.555525    8304 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 04:31:22.569006    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0719 04:31:22.587976    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0719 04:31:22.617995    8304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:31:22.649751    8304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0719 04:31:22.679804    8304 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0719 04:31:22.725152    8304 ssh_runner.go:195] Run: grep 172.28.175.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:31:22.731957    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:31:22.765887    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:31:22.958363    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:31:22.988918    8304 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500 for IP: 172.28.168.223
	I0719 04:31:22.988918    8304 certs.go:194] generating shared ca certs ...
	I0719 04:31:22.988918    8304 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.005617    8304 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 04:31:23.022699    8304 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 04:31:23.023018    8304 certs.go:256] generating profile certs ...
	I0719 04:31:23.023283    8304 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key
	I0719 04:31:23.023817    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.crt with IP's: []
	I0719 04:31:23.133804    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.crt ...
	I0719 04:31:23.133804    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.crt: {Name:mk2fd3422fb14cd0850d18aa8c21329d8e241619 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.135311    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key ...
	I0719 04:31:23.135311    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key: {Name:mkc7c7bf529d2be753ba98c145eb5a351142671b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.136427    8304 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.7e2b9445
	I0719 04:31:23.137063    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.7e2b9445 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.168.223 172.28.175.254]
	I0719 04:31:23.399307    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.7e2b9445 ...
	I0719 04:31:23.399307    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.7e2b9445: {Name:mk95e206c16a42dba6cfde7871ec451eb4f8d55b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.400093    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.7e2b9445 ...
	I0719 04:31:23.401170    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.7e2b9445: {Name:mk2e2946de104f605963884e075902631228a152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.401398    8304 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.7e2b9445 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt
	I0719 04:31:23.413523    8304 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.7e2b9445 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key
	I0719 04:31:23.415651    8304 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key
	I0719 04:31:23.415651    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt with IP's: []
	I0719 04:31:23.655329    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt ...
	I0719 04:31:23.656278    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt: {Name:mkcde0934063d4bb3c8946462ef361cd0b8a0a56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.656660    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key ...
	I0719 04:31:23.656660    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key: {Name:mke80e57cf095435127648d70814bc8a36740f5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:23.657895    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:31:23.658896    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:31:23.659073    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:31:23.659216    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:31:23.659406    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:31:23.659572    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:31:23.659730    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:31:23.668911    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:31:23.669963    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 04:31:23.678074    8304 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 04:31:23.678161    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 04:31:23.678161    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 04:31:23.678935    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 04:31:23.679235    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 04:31:23.679945    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 04:31:23.680191    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 04:31:23.680473    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:31:23.680473    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 04:31:23.681988    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:31:23.734526    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:31:23.777427    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:31:23.827330    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:31:23.869955    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 04:31:23.917444    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:31:23.962458    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:31:24.010772    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:31:24.062116    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 04:31:24.109468    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:31:24.158262    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 04:31:24.205554    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 04:31:24.252567    8304 ssh_runner.go:195] Run: openssl version
	I0719 04:31:24.273960    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:31:24.303988    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:31:24.311002    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:31:24.321015    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:31:24.342547    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:31:24.373910    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 04:31:24.403746    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 04:31:24.410732    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 04:31:24.421945    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 04:31:24.443979    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
	I0719 04:31:24.475015    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 04:31:24.505745    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 04:31:24.512338    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 04:31:24.525325    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 04:31:24.544436    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
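The `ln -fs ... /etc/ssl/certs/<hash>.0` commands above build OpenSSL-style subject-hash symlinks so the trust store can locate each CA by its subject hash. A minimal sketch of that mechanism, assuming `openssl` is installed; the cert name `demo.pem` and the `/tmp` paths are illustrative, not from the log:

```shell
# Generate a throwaway self-signed cert (illustrative names only).
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.pem -days 1 -subj "/CN=demo" 2>/dev/null
# Compute the subject hash, exactly as the log does with
# `openssl x509 -hash -noout -in <cert>`.
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)
# Link <hash>.0 to the cert, mirroring `ln -fs ... /etc/ssl/certs/<hash>.0`.
ln -fs /tmp/demo.pem "/tmp/${hash}.0"
readlink "/tmp/${hash}.0"
```

OpenSSL resolves CAs at verify time by probing `<hash>.N` filenames in the cert directory, which is why minikube creates these links rather than rebuilding a bundle.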
	I0719 04:31:24.579894    8304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:31:24.587452    8304 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:31:24.587932    8304 kubeadm.go:392] StartCluster: {Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:31:24.599080    8304 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 04:31:24.631855    8304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 04:31:24.658310    8304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 04:31:24.693636    8304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 04:31:24.710350    8304 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 04:31:24.710350    8304 kubeadm.go:157] found existing configuration files:
	
	I0719 04:31:24.721446    8304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 04:31:24.737559    8304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 04:31:24.749042    8304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 04:31:24.779855    8304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 04:31:24.797933    8304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 04:31:24.810533    8304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 04:31:24.840759    8304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 04:31:24.858678    8304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 04:31:24.872995    8304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 04:31:24.904317    8304 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 04:31:24.927117    8304 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 04:31:24.946611    8304 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
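The stale-config cleanup above greps each kubeconfig for the expected control-plane endpoint and removes the file when the grep fails (here it fails only because the files don't exist yet on first start). A hedged sketch of that keep-or-remove decision; the endpoint string matches the log, but the `/tmp` path is made up for illustration:

```shell
# Keep a kubeconfig only if it references the expected endpoint;
# otherwise remove it so kubeadm regenerates it (sketch, not minikube code).
endpoint="https://control-plane.minikube.internal:8443"
conf=/tmp/demo-admin.conf
printf 'server: %s\n' "$endpoint" > "$conf"
if grep -q "$endpoint" "$conf"; then
  echo "keep $conf"
else
  rm -f "$conf"
fi
```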
	I0719 04:31:24.970150    8304 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 04:31:25.492108    8304 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 04:31:41.407301    8304 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 04:31:41.407431    8304 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 04:31:41.407643    8304 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 04:31:41.407956    8304 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 04:31:41.408310    8304 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 04:31:41.408456    8304 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 04:31:41.411239    8304 out.go:204]   - Generating certificates and keys ...
	I0719 04:31:41.411571    8304 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 04:31:41.411702    8304 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 04:31:41.411702    8304 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 04:31:41.411702    8304 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 04:31:41.412249    8304 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 04:31:41.412406    8304 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 04:31:41.412554    8304 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 04:31:41.412657    8304 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-062500 localhost] and IPs [172.28.168.223 127.0.0.1 ::1]
	I0719 04:31:41.412657    8304 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 04:31:41.413246    8304 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-062500 localhost] and IPs [172.28.168.223 127.0.0.1 ::1]
	I0719 04:31:41.413577    8304 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 04:31:41.413777    8304 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 04:31:41.413962    8304 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 04:31:41.414087    8304 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 04:31:41.414087    8304 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 04:31:41.414087    8304 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 04:31:41.414087    8304 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 04:31:41.414630    8304 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 04:31:41.414929    8304 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 04:31:41.415094    8304 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 04:31:41.415094    8304 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 04:31:41.418086    8304 out.go:204]   - Booting up control plane ...
	I0719 04:31:41.418231    8304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 04:31:41.418231    8304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 04:31:41.418231    8304 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 04:31:41.418918    8304 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 04:31:41.418918    8304 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 04:31:41.418918    8304 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 04:31:41.419497    8304 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 04:31:41.419755    8304 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 04:31:41.419755    8304 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002479396s
	I0719 04:31:41.419755    8304 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 04:31:41.419755    8304 kubeadm.go:310] [api-check] The API server is healthy after 8.982301087s
	I0719 04:31:41.420464    8304 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 04:31:41.420573    8304 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 04:31:41.420573    8304 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 04:31:41.421245    8304 kubeadm.go:310] [mark-control-plane] Marking the node ha-062500 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 04:31:41.421459    8304 kubeadm.go:310] [bootstrap-token] Using token: obov36.teetl7w3d3ffgrf9
	I0719 04:31:41.428385    8304 out.go:204]   - Configuring RBAC rules ...
	I0719 04:31:41.428385    8304 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 04:31:41.428385    8304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 04:31:41.429172    8304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 04:31:41.429565    8304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 04:31:41.429808    8304 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 04:31:41.430048    8304 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 04:31:41.430048    8304 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 04:31:41.430048    8304 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 04:31:41.430048    8304 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 04:31:41.430627    8304 kubeadm.go:310] 
	I0719 04:31:41.430959    8304 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 04:31:41.430990    8304 kubeadm.go:310] 
	I0719 04:31:41.431170    8304 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 04:31:41.431251    8304 kubeadm.go:310] 
	I0719 04:31:41.431351    8304 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 04:31:41.431439    8304 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 04:31:41.431513    8304 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 04:31:41.431513    8304 kubeadm.go:310] 
	I0719 04:31:41.431513    8304 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 04:31:41.431513    8304 kubeadm.go:310] 
	I0719 04:31:41.431513    8304 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 04:31:41.431513    8304 kubeadm.go:310] 
	I0719 04:31:41.431513    8304 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 04:31:41.431513    8304 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 04:31:41.432242    8304 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 04:31:41.432273    8304 kubeadm.go:310] 
	I0719 04:31:41.432446    8304 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 04:31:41.432605    8304 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 04:31:41.432605    8304 kubeadm.go:310] 
	I0719 04:31:41.432795    8304 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token obov36.teetl7w3d3ffgrf9 \
	I0719 04:31:41.433032    8304 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 \
	I0719 04:31:41.433032    8304 kubeadm.go:310] 	--control-plane 
	I0719 04:31:41.433032    8304 kubeadm.go:310] 
	I0719 04:31:41.433388    8304 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 04:31:41.433388    8304 kubeadm.go:310] 
	I0719 04:31:41.433569    8304 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token obov36.teetl7w3d3ffgrf9 \
	I0719 04:31:41.433569    8304 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 
	I0719 04:31:41.433956    8304 cni.go:84] Creating CNI manager for ""
	I0719 04:31:41.433999    8304 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 04:31:41.444441    8304 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 04:31:41.458795    8304 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 04:31:41.466212    8304 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 04:31:41.466212    8304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 04:31:41.510042    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 04:31:42.096873    8304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 04:31:42.110585    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:42.112556    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-062500 minikube.k8s.io/updated_at=2024_07_19T04_31_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-062500 minikube.k8s.io/primary=true
	I0719 04:31:42.122560    8304 ops.go:34] apiserver oom_adj: -16
	I0719 04:31:42.376213    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:42.888962    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:43.388066    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:43.876340    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:44.383641    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:44.885698    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:45.384539    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:45.886261    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:46.388409    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:46.889939    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:47.389444    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:47.889000    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:48.390560    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:48.876684    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:49.377749    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:49.885613    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:50.386581    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:50.886094    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:51.390496    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:51.879609    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:52.377332    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:52.880696    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:53.387270    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 04:31:53.880228    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
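The timestamps on the repeated `kubectl get sa default` runs above show minikube polling roughly every 500ms until the `default` ServiceAccount exists (the `elevateKubeSystemPrivileges` wait). The pattern can be sketched as a generic retry loop; `poll` and its arguments are hypothetical names, not minikube's API:

```shell
# Retry a command up to $1 times with a ~500ms pause between attempts,
# mirroring the cadence visible in the log timestamps (sketch only).
poll() {
  attempts=$1
  cmd=$2
  for i in $(seq 1 "$attempts"); do
    if "$cmd"; then return 0; fi
    sleep 0.5
  done
  return 1
}
poll 3 true && echo "ready"
```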
	I0719 04:31:54.052466    8304 kubeadm.go:1113] duration metric: took 11.9554556s to wait for elevateKubeSystemPrivileges
	I0719 04:31:54.053543    8304 kubeadm.go:394] duration metric: took 29.4652724s to StartCluster
	I0719 04:31:54.053543    8304 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:54.053543    8304 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:31:54.055056    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:31:54.056426    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 04:31:54.056426    8304 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:31:54.056426    8304 start.go:241] waiting for startup goroutines ...
	I0719 04:31:54.056426    8304 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 04:31:54.056426    8304 addons.go:69] Setting default-storageclass=true in profile "ha-062500"
	I0719 04:31:54.056426    8304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-062500"
	I0719 04:31:54.056426    8304 addons.go:69] Setting storage-provisioner=true in profile "ha-062500"
	I0719 04:31:54.056957    8304 addons.go:234] Setting addon storage-provisioner=true in "ha-062500"
	I0719 04:31:54.057199    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:31:54.057274    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:31:54.058857    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:54.059645    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:54.281265    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 04:31:54.630390    8304 start.go:971] {"host.minikube.internal": 172.28.160.1} host record injected into CoreDNS's ConfigMap
	I0719 04:31:56.592369    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:31:56.593278    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:56.594077    8304 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:31:56.594710    8304 kapi.go:59] client config for ha-062500: &rest.Config{Host:"https://172.28.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 04:31:56.596607    8304 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 04:31:56.597001    8304 addons.go:234] Setting addon default-storageclass=true in "ha-062500"
	I0719 04:31:56.597212    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:31:56.597891    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:56.637973    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:31:56.638352    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:56.641754    8304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 04:31:56.645618    8304 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:31:56.645618    8304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 04:31:56.645696    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:58.946591    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:31:58.946591    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:58.946591    8304 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 04:31:58.946591    8304 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 04:31:58.946591    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:31:59.010398    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:31:59.010578    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:31:59.010675    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:32:01.339884    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:32:01.340131    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:01.340131    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:32:01.817521    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:32:01.817699    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:01.818392    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:32:01.980812    8304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 04:32:04.034807    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:32:04.034807    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:04.034807    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:32:04.170047    8304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 04:32:04.326503    8304 round_trippers.go:463] GET https://172.28.175.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 04:32:04.326549    8304 round_trippers.go:469] Request Headers:
	I0719 04:32:04.326594    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:32:04.326594    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:32:04.339165    8304 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 04:32:04.340467    8304 round_trippers.go:463] PUT https://172.28.175.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 04:32:04.340549    8304 round_trippers.go:469] Request Headers:
	I0719 04:32:04.340549    8304 round_trippers.go:473]     Content-Type: application/json
	I0719 04:32:04.340549    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:32:04.340549    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:32:04.350978    8304 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 04:32:04.355917    8304 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 04:32:04.360570    8304 addons.go:510] duration metric: took 10.3040256s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 04:32:04.360570    8304 start.go:246] waiting for cluster config update ...
	I0719 04:32:04.360570    8304 start.go:255] writing updated cluster config ...
	I0719 04:32:04.366652    8304 out.go:177] 
	I0719 04:32:04.375410    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:32:04.375410    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:32:04.382126    8304 out.go:177] * Starting "ha-062500-m02" control-plane node in "ha-062500" cluster
	I0719 04:32:04.385631    8304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:32:04.385631    8304 cache.go:56] Caching tarball of preloaded images
	I0719 04:32:04.385631    8304 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:32:04.386531    8304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:32:04.386531    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:32:04.389466    8304 start.go:360] acquireMachinesLock for ha-062500-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:32:04.390558    8304 start.go:364] duration metric: took 1.092ms to acquireMachinesLock for "ha-062500-m02"
	I0719 04:32:04.390696    8304 start.go:93] Provisioning new machine with config: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:32:04.390696    8304 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0719 04:32:04.395665    8304 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:32:04.396663    8304 start.go:159] libmachine.API.Create for "ha-062500" (driver="hyperv")
	I0719 04:32:04.396663    8304 client.go:168] LocalClient.Create starting
	I0719 04:32:04.397202    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 04:32:04.397499    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:32:04.397499    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:32:04.397617    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 04:32:04.397617    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:32:04.397617    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:32:04.397617    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 04:32:06.298953    8304 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 04:32:06.300094    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:06.300094    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 04:32:08.095139    8304 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 04:32:08.095139    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:08.095139    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:32:09.625371    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:32:09.625371    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:09.625460    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:32:13.347945    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:32:13.347945    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:13.352053    8304 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:32:13.777804    8304 main.go:141] libmachine: Creating SSH key...
	I0719 04:32:14.037694    8304 main.go:141] libmachine: Creating VM...
	I0719 04:32:14.037830    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:32:17.000998    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:32:17.000998    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:17.001084    8304 main.go:141] libmachine: Using switch "Default Switch"
	I0719 04:32:17.001163    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:32:18.764615    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:32:18.764615    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:18.764615    8304 main.go:141] libmachine: Creating VHD
	I0719 04:32:18.765025    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 04:32:22.666040    8304 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 96805E3E-94FF-481A-B83B-4C9A63A1B868
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 04:32:22.666040    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:22.666040    8304 main.go:141] libmachine: Writing magic tar header
	I0719 04:32:22.666040    8304 main.go:141] libmachine: Writing SSH key tar header
	I0719 04:32:22.675906    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 04:32:25.945431    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:25.945575    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:25.945575    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\disk.vhd' -SizeBytes 20000MB
	I0719 04:32:28.523659    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:28.523659    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:28.523848    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-062500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0719 04:32:32.202348    8304 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-062500-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 04:32:32.202348    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:32.202429    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-062500-m02 -DynamicMemoryEnabled $false
	I0719 04:32:34.507336    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:34.508072    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:34.508229    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-062500-m02 -Count 2
	I0719 04:32:36.738293    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:36.738293    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:36.739090    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-062500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\boot2docker.iso'
	I0719 04:32:39.368162    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:39.368724    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:39.368724    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-062500-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\disk.vhd'
	I0719 04:32:42.055680    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:42.055748    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:42.055748    8304 main.go:141] libmachine: Starting VM...
	I0719 04:32:42.055748    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-062500-m02
	I0719 04:32:45.177105    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:45.177105    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:45.177105    8304 main.go:141] libmachine: Waiting for host to start...
	I0719 04:32:45.177105    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:32:47.553788    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:32:47.554480    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:47.554480    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:32:50.127086    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:50.127086    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:51.128745    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:32:53.386161    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:32:53.386610    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:53.386610    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:32:55.953528    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:32:55.953528    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:56.953782    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:32:59.267522    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:32:59.267522    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:32:59.268144    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:01.867195    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:33:01.867598    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:02.875690    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:05.143930    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:05.143982    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:05.143982    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:07.759834    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:33:07.759834    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:08.761012    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:11.105424    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:11.105424    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:11.105424    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:13.774101    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:13.774101    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:13.774359    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:15.966951    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:15.968130    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:15.968130    8304 machine.go:94] provisionDockerMachine start ...
	I0719 04:33:15.968338    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:18.245918    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:18.246089    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:18.246177    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:20.863605    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:20.864414    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:20.869963    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:20.880042    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:20.881052    8304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:33:21.013217    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 04:33:21.013217    8304 buildroot.go:166] provisioning hostname "ha-062500-m02"
	I0719 04:33:21.013453    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:23.271386    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:23.271386    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:23.271386    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:25.882495    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:25.882495    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:25.887874    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:25.887913    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:25.887913    8304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-062500-m02 && echo "ha-062500-m02" | sudo tee /etc/hostname
	I0719 04:33:26.064397    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-062500-m02
	
	I0719 04:33:26.064513    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:28.330951    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:28.331953    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:28.332210    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:30.937238    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:30.937238    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:30.943341    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:30.944113    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:30.944113    8304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-062500-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-062500-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-062500-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:33:31.098166    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 04:33:31.098166    8304 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:33:31.098276    8304 buildroot.go:174] setting up certificates
	I0719 04:33:31.098276    8304 provision.go:84] configureAuth start
	I0719 04:33:31.098381    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:33.328370    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:33.328370    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:33.328370    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:35.920852    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:35.920852    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:35.921035    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:38.141107    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:38.141377    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:38.141377    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:40.743142    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:40.743825    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:40.743825    8304 provision.go:143] copyHostCerts
	I0719 04:33:40.744024    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 04:33:40.744580    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:33:40.744580    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:33:40.745165    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:33:40.746335    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 04:33:40.746555    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:33:40.746672    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:33:40.747036    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:33:40.748162    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 04:33:40.748350    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:33:40.748350    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:33:40.748350    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:33:40.749893    8304 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-062500-m02 san=[127.0.0.1 172.28.171.55 ha-062500-m02 localhost minikube]
	I0719 04:33:40.832973    8304 provision.go:177] copyRemoteCerts
	I0719 04:33:40.843026    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:33:40.843026    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:43.006903    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:43.007430    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:43.007430    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:45.586850    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:45.586850    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:45.588009    8304 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:33:45.697056    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8539736s)
	I0719 04:33:45.697106    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 04:33:45.697636    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 04:33:45.744118    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 04:33:45.744613    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:33:45.790380    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 04:33:45.790932    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:33:45.836575    8304 provision.go:87] duration metric: took 14.7381305s to configureAuth
	I0719 04:33:45.836575    8304 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:33:45.837392    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:33:45.837392    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:47.997471    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:47.998259    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:47.998259    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:50.565843    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:50.566672    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:50.572072    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:50.572773    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:50.572773    8304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:33:50.713088    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:33:50.713162    8304 buildroot.go:70] root file system type: tmpfs
	I0719 04:33:50.713398    8304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:33:50.713467    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:52.871204    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:52.871999    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:52.872067    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:33:55.498921    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:33:55.499145    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:55.503986    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:33:55.504607    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:33:55.504607    8304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.168.223"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:33:55.662177    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.168.223
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 04:33:55.662349    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:33:57.829807    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:33:57.829807    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:33:57.830048    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:00.399637    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:00.400649    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:00.408146    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:34:00.409178    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:34:00.409178    8304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:34:02.637936    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 04:34:02.637936    8304 machine.go:97] duration metric: took 46.669269s to provisionDockerMachine
	I0719 04:34:02.637936    8304 client.go:171] duration metric: took 1m58.2399132s to LocalClient.Create
	I0719 04:34:02.637936    8304 start.go:167] duration metric: took 1m58.2399511s to libmachine.API.Create "ha-062500"
	I0719 04:34:02.637936    8304 start.go:293] postStartSetup for "ha-062500-m02" (driver="hyperv")
	I0719 04:34:02.637936    8304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:34:02.650439    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:34:02.650439    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:04.821072    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:04.822078    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:04.822320    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:07.475069    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:07.475069    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:07.475577    8304 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:34:07.595359    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9448635s)
	I0719 04:34:07.606784    8304 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:34:07.613198    8304 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:34:07.613198    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:34:07.613732    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:34:07.614626    8304 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:34:07.614626    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 04:34:07.626810    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:34:07.642993    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:34:07.694537    8304 start.go:296] duration metric: took 5.0565427s for postStartSetup
	I0719 04:34:07.697850    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:09.912066    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:09.912066    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:09.912747    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:12.507225    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:12.507225    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:12.507667    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:34:12.510126    8304 start.go:128] duration metric: took 2m8.1179575s to createHost
	I0719 04:34:12.510221    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:14.692063    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:14.692119    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:14.692119    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:17.297859    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:17.297859    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:17.303950    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:34:17.304843    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:34:17.304843    8304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 04:34:17.436003    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363657.451249714
	
	I0719 04:34:17.436195    8304 fix.go:216] guest clock: 1721363657.451249714
	I0719 04:34:17.436195    8304 fix.go:229] Guest: 2024-07-19 04:34:17.451249714 +0000 UTC Remote: 2024-07-19 04:34:12.5101269 +0000 UTC m=+343.368425901 (delta=4.941122814s)
	I0719 04:34:17.436268    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:19.636508    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:19.636555    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:19.636625    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:22.236982    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:22.238004    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:22.243426    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:34:22.243426    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.171.55 22 <nil> <nil>}
	I0719 04:34:22.243426    8304 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721363657
	I0719 04:34:22.389651    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:34:17 UTC 2024
	
	I0719 04:34:22.389651    8304 fix.go:236] clock set: Fri Jul 19 04:34:17 UTC 2024
	 (err=<nil>)
	I0719 04:34:22.389651    8304 start.go:83] releasing machines lock for "ha-062500-m02", held for 2m17.9975065s
	I0719 04:34:22.390296    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:24.567457    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:24.567518    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:24.567642    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:27.167869    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:27.167869    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:27.172140    8304 out.go:177] * Found network options:
	I0719 04:34:27.176410    8304 out.go:177]   - NO_PROXY=172.28.168.223
	W0719 04:34:27.179041    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:34:27.181073    8304 out.go:177]   - NO_PROXY=172.28.168.223
	W0719 04:34:27.183782    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:34:27.185180    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:34:27.187159    8304 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:34:27.187547    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:27.196167    8304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 04:34:27.196167    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m02 ).state
	I0719 04:34:29.476081    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:29.476132    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:29.476132    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:29.508118    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:29.508188    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:29.508188    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:32.259057    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:32.260203    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:32.260741    8304 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:34:32.288986    8304 main.go:141] libmachine: [stdout =====>] : 172.28.171.55
	
	I0719 04:34:32.289401    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:32.289803    8304 sshutil.go:53] new ssh client: &{IP:172.28.171.55 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m02\id_rsa Username:docker}
	I0719 04:34:32.359244    8304 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1720252s)
	W0719 04:34:32.359244    8304 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:34:32.392908    8304 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1965774s)
	W0719 04:34:32.392908    8304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:34:32.404240    8304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:34:32.433447    8304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:34:32.433590    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:34:32.433725    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:34:32.482405    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0719 04:34:32.498052    8304 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:34:32.498052    8304 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:34:32.518266    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 04:34:32.541147    8304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:34:32.552815    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:34:32.587644    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:34:32.621594    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:34:32.651777    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:34:32.683090    8304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:34:32.714311    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:34:32.744836    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:34:32.775714    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 04:34:32.807380    8304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:34:32.835365    8304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:34:32.863030    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:33.068054    8304 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 04:34:33.102673    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:34:33.115011    8304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:34:33.158387    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:34:33.193643    8304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:34:33.242334    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:34:33.278542    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:34:33.316342    8304 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 04:34:33.378803    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:34:33.402685    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:34:33.447850    8304 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:34:33.467514    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:34:33.483586    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:34:33.524283    8304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:34:33.713836    8304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:34:33.908580    8304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:34:33.908689    8304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:34:33.953596    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:34.166124    8304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:34:36.744490    8304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5783372s)
	I0719 04:34:36.756888    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 04:34:36.796399    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:34:36.837690    8304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 04:34:37.036882    8304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 04:34:37.256772    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:37.465459    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 04:34:37.507073    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:34:37.543421    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:37.739417    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 04:34:37.851653    8304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 04:34:37.864055    8304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 04:34:37.873758    8304 start.go:563] Will wait 60s for crictl version
	I0719 04:34:37.884712    8304 ssh_runner.go:195] Run: which crictl
	I0719 04:34:37.901145    8304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:34:37.956329    8304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 04:34:37.964898    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:34:38.004668    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:34:38.042482    8304 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 04:34:38.045021    8304 out.go:177]   - env NO_PROXY=172.28.168.223
	I0719 04:34:38.047746    8304 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 04:34:38.051840    8304 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 04:34:38.051840    8304 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 04:34:38.051840    8304 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 04:34:38.051840    8304 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 04:34:38.054486    8304 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 04:34:38.054486    8304 ip.go:210] interface addr: 172.28.160.1/20
	I0719 04:34:38.065447    8304 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 04:34:38.072444    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:34:38.094111    8304 mustload.go:65] Loading cluster: ha-062500
	I0719 04:34:38.094530    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:34:38.095513    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:34:40.262701    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:40.262701    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:40.262701    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:34:40.264001    8304 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500 for IP: 172.28.171.55
	I0719 04:34:40.264001    8304 certs.go:194] generating shared ca certs ...
	I0719 04:34:40.264103    8304 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:34:40.264661    8304 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 04:34:40.265284    8304 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 04:34:40.265455    8304 certs.go:256] generating profile certs ...
	I0719 04:34:40.266198    8304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key
	I0719 04:34:40.266467    8304 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.37640fbc
	I0719 04:34:40.266690    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.37640fbc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.168.223 172.28.171.55 172.28.175.254]
	I0719 04:34:40.343987    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.37640fbc ...
	I0719 04:34:40.343987    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.37640fbc: {Name:mkd78f4da0d794c5fe5aee03af6db8c88c496c91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:34:40.344928    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.37640fbc ...
	I0719 04:34:40.344928    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.37640fbc: {Name:mkf2ec88ef353924c4d5486fd8616e188139fa66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:34:40.345964    8304 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.37640fbc -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt
	I0719 04:34:40.359866    8304 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.37640fbc -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key
	I0719 04:34:40.361167    8304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key
	I0719 04:34:40.361167    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:34:40.361316    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:34:40.361316    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:34:40.361316    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:34:40.361316    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:34:40.361884    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:34:40.361884    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:34:40.361884    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:34:40.362822    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 04:34:40.363434    8304 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 04:34:40.363671    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 04:34:40.363879    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 04:34:40.364350    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 04:34:40.364649    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 04:34:40.365268    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 04:34:40.365268    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 04:34:40.365268    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:34:40.365268    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 04:34:40.365936    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:34:42.572033    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:42.572033    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:42.572033    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:45.194645    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:34:45.195202    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:45.195800    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:34:45.295317    8304 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0719 04:34:45.303478    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 04:34:45.336347    8304 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0719 04:34:45.343055    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 04:34:45.373015    8304 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 04:34:45.379629    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 04:34:45.416639    8304 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0719 04:34:45.423556    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0719 04:34:45.453544    8304 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0719 04:34:45.460767    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 04:34:45.496227    8304 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0719 04:34:45.503532    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0719 04:34:45.522638    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:34:45.571865    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:34:45.619630    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:34:45.665337    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:34:45.709951    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0719 04:34:45.760470    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:34:45.804636    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:34:45.849004    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:34:45.897086    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 04:34:45.940879    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:34:45.985517    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 04:34:46.031305    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 04:34:46.069125    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 04:34:46.100426    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 04:34:46.130850    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0719 04:34:46.162096    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 04:34:46.193989    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0719 04:34:46.225184    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 04:34:46.271254    8304 ssh_runner.go:195] Run: openssl version
	I0719 04:34:46.290344    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 04:34:46.319149    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 04:34:46.327467    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 04:34:46.340009    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 04:34:46.360967    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:34:46.391711    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:34:46.422919    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:34:46.429831    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:34:46.441753    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:34:46.460915    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 04:34:46.494175    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 04:34:46.526282    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 04:34:46.532052    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 04:34:46.544154    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 04:34:46.565716    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
	I0719 04:34:46.618565    8304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:34:46.625848    8304 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:34:46.626396    8304 kubeadm.go:934] updating node {m02 172.28.171.55 8443 v1.30.3 docker true true} ...
	I0719 04:34:46.626748    8304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-062500-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.171.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 04:34:46.626748    8304 kube-vip.go:115] generating kube-vip config ...
	I0719 04:34:46.640179    8304 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:34:46.672859    8304 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:34:46.672956    8304 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.175.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:34:46.685240    8304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:34:46.704965    8304 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 04:34:46.716937    8304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 04:34:46.736936    8304 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl
	I0719 04:34:46.736936    8304 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm
	I0719 04:34:46.736936    8304 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet
	I0719 04:34:47.961315    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:34:47.973342    8304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:34:47.981760    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 04:34:47.981760    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 04:34:49.392286    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:34:49.403963    8304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:34:49.414004    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 04:34:49.414479    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 04:34:51.192004    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:34:51.217758    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:34:51.228974    8304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:34:51.236601    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 04:34:51.236842    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 04:34:52.096677    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 04:34:52.115556    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0719 04:34:52.150287    8304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:34:52.184011    8304 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:34:52.239444    8304 ssh_runner.go:195] Run: grep 172.28.175.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:34:52.245595    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:34:52.279143    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:34:52.488994    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:34:52.519075    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:34:52.520356    8304 start.go:317] joinCluster: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:34:52.520666    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 04:34:52.520752    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:34:54.724690    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:34:54.724690    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:54.724844    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:34:57.431278    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:34:57.431278    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:34:57.432734    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:34:57.637984    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.117142s)
	I0719 04:34:57.637984    8304 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:34:57.638173    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token de3dy6.f83qklug2r5n0div --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-062500-m02 --control-plane --apiserver-advertise-address=172.28.171.55 --apiserver-bind-port=8443"
	I0719 04:35:43.653172    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token de3dy6.f83qklug2r5n0div --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-062500-m02 --control-plane --apiserver-advertise-address=172.28.171.55 --apiserver-bind-port=8443": (46.0144113s)
	I0719 04:35:43.653261    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 04:35:44.446397    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-062500-m02 minikube.k8s.io/updated_at=2024_07_19T04_35_44_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-062500 minikube.k8s.io/primary=false
	I0719 04:35:44.619269    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-062500-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 04:35:44.770923    8304 start.go:319] duration metric: took 52.2499661s to joinCluster
	I0719 04:35:44.771244    8304 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:35:44.772095    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:35:44.776775    8304 out.go:177] * Verifying Kubernetes components...
	I0719 04:35:44.797475    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:35:45.168637    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:35:45.200199    8304 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:35:45.201025    8304 kapi.go:59] client config for ha-062500: &rest.Config{Host:"https://172.28.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 04:35:45.201188    8304 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.175.254:8443 with https://172.28.168.223:8443
	I0719 04:35:45.201801    8304 node_ready.go:35] waiting up to 6m0s for node "ha-062500-m02" to be "Ready" ...
	I0719 04:35:45.201801    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:45.202346    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:45.202346    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:45.202346    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:45.218315    8304 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0719 04:35:45.710071    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:45.710071    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:45.710071    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:45.710071    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:45.716745    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:35:46.214606    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:46.214898    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:46.214898    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:46.214898    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:46.219381    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:46.715582    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:46.715692    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:46.715783    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:46.715783    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:46.723072    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:35:47.207702    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:47.207702    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:47.207702    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:47.207702    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:47.213675    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:47.215151    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:47.713046    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:47.713453    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:47.713453    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:47.713512    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:47.719298    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:48.207499    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:48.207614    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:48.207614    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:48.207614    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:48.212852    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:48.717437    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:48.717641    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:48.717641    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:48.717690    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:48.722388    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:49.209427    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:49.209427    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:49.209427    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:49.209427    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:49.214319    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:49.215251    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:49.711671    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:49.711891    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:49.711891    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:49.711891    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:49.716297    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:50.204111    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:50.204363    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:50.204363    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:50.204363    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:50.213202    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:35:50.710613    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:50.710613    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:50.710613    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:50.710613    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:50.717494    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:35:51.202100    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:51.202100    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:51.202100    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:51.202100    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:51.207987    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:51.706696    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:51.706782    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:51.706782    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:51.706782    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:51.713182    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:35:51.714755    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:52.207754    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:52.207754    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:52.207754    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:52.207754    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:52.228509    8304 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0719 04:35:52.702366    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:52.702366    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:52.702366    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:52.702366    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:52.707944    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:53.207108    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:53.207203    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:53.207203    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:53.207203    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:53.215282    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:35:53.710087    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:53.710087    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:53.710191    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:53.710191    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:53.714840    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:53.716061    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:54.209916    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:54.210188    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:54.210188    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:54.210188    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:54.215300    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:54.715291    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:54.715590    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:54.715590    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:54.715590    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:54.720382    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:55.202693    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:55.202822    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:55.202822    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:55.202822    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:55.208371    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:55.704569    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:55.704702    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:55.704702    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:55.704702    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:55.710143    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:56.204096    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:56.204234    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:56.204234    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:56.204234    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:56.209775    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:56.210655    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:56.705230    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:56.705353    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:56.705353    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:56.705353    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:56.713700    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:35:57.208301    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:57.208409    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:57.208409    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:57.208409    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:57.213392    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:35:57.706667    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:57.706913    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:57.706913    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:57.706913    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:57.713423    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:35:58.209636    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:58.209894    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:58.209894    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:58.209894    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:58.215467    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:58.216750    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:35:58.702561    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:58.702561    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:58.702561    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:58.702561    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:58.708083    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:35:59.207508    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:59.207853    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:59.207853    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:59.207853    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:59.218632    8304 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 04:35:59.707799    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:35:59.707799    8304 round_trippers.go:469] Request Headers:
	I0719 04:35:59.708088    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:35:59.708088    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:35:59.716734    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:00.205102    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:00.205102    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:00.205202    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:00.205202    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:00.210494    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:00.706041    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:00.706121    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:00.706121    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:00.706121    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:00.711370    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:00.712552    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:36:01.206744    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:01.206835    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:01.206835    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:01.206835    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:01.212690    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:01.708054    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:01.708054    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:01.708054    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:01.708194    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:01.714621    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:02.209534    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:02.209534    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:02.209534    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:02.209534    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:02.215678    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:02.714208    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:02.714208    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:02.714338    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:02.714338    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:02.719683    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:02.720970    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:36:03.216484    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:03.216484    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:03.216484    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:03.216484    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:03.223168    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:03.714863    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:03.715155    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:03.715155    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:03.715155    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:03.721067    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:04.213668    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:04.213737    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:04.213737    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:04.213737    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:04.218988    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:04.710753    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:04.710753    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:04.710753    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:04.710753    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:04.716883    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:05.210753    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:05.210753    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:05.210753    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:05.210753    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:05.216594    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:05.217536    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:36:05.702967    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:05.702967    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:05.702967    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:05.703084    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:05.709537    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:06.215815    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:06.215815    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:06.215815    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:06.215815    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:06.222651    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:06.717486    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:06.717486    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:06.717486    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:06.717486    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:06.723114    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:07.216094    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:07.216094    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:07.216094    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:07.216094    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:07.221681    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:07.222590    8304 node_ready.go:53] node "ha-062500-m02" has status "Ready":"False"
	I0719 04:36:07.715739    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:07.715739    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:07.715739    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:07.715739    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:07.721206    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:08.202556    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:08.202556    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.202556    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.202556    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.207208    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:36:08.209218    8304 node_ready.go:49] node "ha-062500-m02" has status "Ready":"True"
	I0719 04:36:08.209365    8304 node_ready.go:38] duration metric: took 23.0071527s for node "ha-062500-m02" to be "Ready" ...
	I0719 04:36:08.209365    8304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:36:08.209601    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:08.209601    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.209601    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.209691    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.242117    8304 round_trippers.go:574] Response Status: 200 OK in 32 milliseconds
	I0719 04:36:08.252132    8304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.252132    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jb6nt
	I0719 04:36:08.252132    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.252132    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.252132    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.260873    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:08.261892    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.261892    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.261892    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.261892    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.275407    8304 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0719 04:36:08.275915    8304 pod_ready.go:92] pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.275915    8304 pod_ready.go:81] duration metric: took 23.7831ms for pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.275915    8304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.276560    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jpmb4
	I0719 04:36:08.276560    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.276560    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.276560    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.282646    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:08.283139    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.283717    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.283717    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.283717    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.294749    8304 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 04:36:08.295020    8304 pod_ready.go:92] pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.295020    8304 pod_ready.go:81] duration metric: took 19.1044ms for pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.295020    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.295569    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500
	I0719 04:36:08.295569    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.295569    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.295569    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.301985    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:08.303555    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.303555    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.303555    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.303555    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.308160    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:36:08.309511    8304 pod_ready.go:92] pod "etcd-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.309511    8304 pod_ready.go:81] duration metric: took 14.4904ms for pod "etcd-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.309511    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.309511    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500-m02
	I0719 04:36:08.309511    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.309511    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.309511    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.316942    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:36:08.318163    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:08.318220    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.318277    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.318277    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.321217    8304 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 04:36:08.322246    8304 pod_ready.go:92] pod "etcd-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.322246    8304 pod_ready.go:81] duration metric: took 12.7357ms for pod "etcd-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.322246    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.405266    8304 request.go:629] Waited for 82.6105ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500
	I0719 04:36:08.405464    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500
	I0719 04:36:08.405496    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.405533    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.405533    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.410655    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:08.609122    8304 request.go:629] Waited for 196.3904ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.609300    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:08.609374    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.609374    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.609374    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.613530    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:36:08.614448    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:08.614448    8304 pod_ready.go:81] duration metric: took 292.1979ms for pod "kube-apiserver-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.614448    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:08.811671    8304 request.go:629] Waited for 197.2205ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m02
	I0719 04:36:08.811998    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m02
	I0719 04:36:08.811998    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:08.811998    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:08.812241    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:08.817283    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:09.015873    8304 request.go:629] Waited for 197.1283ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:09.016143    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:09.016179    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.016179    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.016179    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.025188    8304 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:36:09.025738    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:09.025782    8304 pod_ready.go:81] duration metric: took 411.3295ms for pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.025782    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.217760    8304 request.go:629] Waited for 191.7564ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500
	I0719 04:36:09.217837    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500
	I0719 04:36:09.217837    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.217837    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.217923    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.223564    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:09.405589    8304 request.go:629] Waited for 179.988ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:09.405883    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:09.405883    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.405883    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.405883    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.411526    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:09.412457    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:09.412457    8304 pod_ready.go:81] duration metric: took 386.671ms for pod "kube-controller-manager-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.412457    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.607957    8304 request.go:629] Waited for 195.4058ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m02
	I0719 04:36:09.608435    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m02
	I0719 04:36:09.608435    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.608435    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.608435    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.616697    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:09.813007    8304 request.go:629] Waited for 195.0688ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:09.813429    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:09.813501    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:09.813501    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:09.813501    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:09.820103    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:36:09.820679    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:09.820679    8304 pod_ready.go:81] duration metric: took 408.2167ms for pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:09.820679    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rtdgs" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.016232    8304 request.go:629] Waited for 195.2432ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtdgs
	I0719 04:36:10.016790    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtdgs
	I0719 04:36:10.016790    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.016790    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.016790    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.024622    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:36:10.204560    8304 request.go:629] Waited for 178.5899ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:10.204741    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:10.204741    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.204741    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.204741    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.210656    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:10.211083    8304 pod_ready.go:92] pod "kube-proxy-rtdgs" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:10.211083    8304 pod_ready.go:81] duration metric: took 390.3997ms for pod "kube-proxy-rtdgs" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.211083    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wv8bn" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.408581    8304 request.go:629] Waited for 197.2106ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wv8bn
	I0719 04:36:10.408738    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wv8bn
	I0719 04:36:10.408738    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.408738    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.408794    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.417445    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:10.612414    8304 request.go:629] Waited for 193.5144ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:10.613273    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:10.613515    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.613515    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.613515    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.619317    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:10.621022    8304 pod_ready.go:92] pod "kube-proxy-wv8bn" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:10.621077    8304 pod_ready.go:81] duration metric: took 409.9888ms for pod "kube-proxy-wv8bn" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.621077    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:10.816150    8304 request.go:629] Waited for 194.9521ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500
	I0719 04:36:10.816150    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500
	I0719 04:36:10.816150    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:10.816150    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:10.816150    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:10.821728    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:11.004343    8304 request.go:629] Waited for 181.7491ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:11.004519    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:36:11.004519    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.004519    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.004519    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.010254    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:11.011846    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:11.011846    8304 pod_ready.go:81] duration metric: took 390.7024ms for pod "kube-scheduler-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:11.011846    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:11.209740    8304 request.go:629] Waited for 197.6481ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m02
	I0719 04:36:11.209851    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m02
	I0719 04:36:11.209851    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.210023    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.210083    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.215922    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:11.412579    8304 request.go:629] Waited for 195.2905ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:11.412807    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:36:11.412807    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.412807    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.412807    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.420217    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:36:11.420943    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:36:11.420943    8304 pod_ready.go:81] duration metric: took 409.0925ms for pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:36:11.420943    8304 pod_ready.go:38] duration metric: took 3.2115411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:36:11.421480    8304 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:36:11.435913    8304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:36:11.466287    8304 api_server.go:72] duration metric: took 26.6945589s to wait for apiserver process to appear ...
	I0719 04:36:11.466287    8304 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:36:11.466394    8304 api_server.go:253] Checking apiserver healthz at https://172.28.168.223:8443/healthz ...
	I0719 04:36:11.476476    8304 api_server.go:279] https://172.28.168.223:8443/healthz returned 200:
	ok
	I0719 04:36:11.476476    8304 round_trippers.go:463] GET https://172.28.168.223:8443/version
	I0719 04:36:11.476476    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.476476    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.476476    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.478371    8304 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 04:36:11.478842    8304 api_server.go:141] control plane version: v1.30.3
	I0719 04:36:11.478842    8304 api_server.go:131] duration metric: took 12.4898ms to wait for apiserver health ...
	I0719 04:36:11.478842    8304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:36:11.615780    8304 request.go:629] Waited for 136.6917ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:11.616046    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:11.616046    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.616046    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.616152    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.625471    8304 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:36:11.632540    8304 system_pods.go:59] 17 kube-system pods found
	I0719 04:36:11.632540    8304 system_pods.go:61] "coredns-7db6d8ff4d-jb6nt" [799dd902-ac1e-4264-91b3-18bdfcd3c8d6] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "coredns-7db6d8ff4d-jpmb4" [f08afb24-1862-49cd-9065-fd21c96614ca] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "etcd-ha-062500" [7fcd86be-7022-4c7c-8144-e2537879c108] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "etcd-ha-062500-m02" [d7896def-bce8-4197-8016-90a7e745f68c] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kindnet-sk9jr" [06a7499a-0467-433d-9e65-5352dec711cf] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kindnet-xw86l" [8513df89-57a9-4e7a-b30f-df6c7ef5ed58] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-apiserver-ha-062500" [495cdc56-2af6-4ceb-acee-26b9bc09d268] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-apiserver-ha-062500-m02" [f880cb8b-d5aa-4141-8031-26951f630b43] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-controller-manager-ha-062500" [72ca647c-6a15-4408-9bc7-ba1be775d35a] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-controller-manager-ha-062500-m02" [031f15e6-c214-44e4-88f7-f7636f1f4a5e] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-proxy-rtdgs" [5c014afc-3ab0-4d20-83b6-adbb9a6133ec] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-proxy-wv8bn" [75f8ca14-0f7c-4e85-884c-b55161236c22] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-scheduler-ha-062500" [bc127693-7c90-4778-bef4-a9aa231e89a8] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-scheduler-ha-062500-m02" [37551193-9128-4afd-9653-1639d1727249] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-vip-ha-062500" [87843ee5-6fdf-473a-8818-47b1927340d6] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "kube-vip-ha-062500-m02" [8ce744ae-1492-4359-860f-f7ff13977733] Running
	I0719 04:36:11.632540    8304 system_pods.go:61] "storage-provisioner" [d029a307-143b-4ef5-8619-f06e267d756c] Running
	I0719 04:36:11.632540    8304 system_pods.go:74] duration metric: took 153.6957ms to wait for pod list to return data ...
	I0719 04:36:11.632540    8304 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:36:11.817888    8304 request.go:629] Waited for 185.3462ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:36:11.818166    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:36:11.818166    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:11.818166    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:11.818166    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:11.822619    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:36:11.823848    8304 default_sa.go:45] found service account: "default"
	I0719 04:36:11.823924    8304 default_sa.go:55] duration metric: took 191.3823ms for default service account to be created ...
	I0719 04:36:11.823924    8304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:36:12.004093    8304 request.go:629] Waited for 179.7796ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:12.004093    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:36:12.004408    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:12.004408    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:12.004408    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:12.013695    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:36:12.022167    8304 system_pods.go:86] 17 kube-system pods found
	I0719 04:36:12.022763    8304 system_pods.go:89] "coredns-7db6d8ff4d-jb6nt" [799dd902-ac1e-4264-91b3-18bdfcd3c8d6] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "coredns-7db6d8ff4d-jpmb4" [f08afb24-1862-49cd-9065-fd21c96614ca] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "etcd-ha-062500" [7fcd86be-7022-4c7c-8144-e2537879c108] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "etcd-ha-062500-m02" [d7896def-bce8-4197-8016-90a7e745f68c] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kindnet-sk9jr" [06a7499a-0467-433d-9e65-5352dec711cf] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kindnet-xw86l" [8513df89-57a9-4e7a-b30f-df6c7ef5ed58] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-apiserver-ha-062500" [495cdc56-2af6-4ceb-acee-26b9bc09d268] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-apiserver-ha-062500-m02" [f880cb8b-d5aa-4141-8031-26951f630b43] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-controller-manager-ha-062500" [72ca647c-6a15-4408-9bc7-ba1be775d35a] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-controller-manager-ha-062500-m02" [031f15e6-c214-44e4-88f7-f7636f1f4a5e] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-proxy-rtdgs" [5c014afc-3ab0-4d20-83b6-adbb9a6133ec] Running
	I0719 04:36:12.022763    8304 system_pods.go:89] "kube-proxy-wv8bn" [75f8ca14-0f7c-4e85-884c-b55161236c22] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "kube-scheduler-ha-062500" [bc127693-7c90-4778-bef4-a9aa231e89a8] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "kube-scheduler-ha-062500-m02" [37551193-9128-4afd-9653-1639d1727249] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "kube-vip-ha-062500" [87843ee5-6fdf-473a-8818-47b1927340d6] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "kube-vip-ha-062500-m02" [8ce744ae-1492-4359-860f-f7ff13977733] Running
	I0719 04:36:12.022895    8304 system_pods.go:89] "storage-provisioner" [d029a307-143b-4ef5-8619-f06e267d756c] Running
	I0719 04:36:12.022895    8304 system_pods.go:126] duration metric: took 198.9688ms to wait for k8s-apps to be running ...
	I0719 04:36:12.022895    8304 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:36:12.033431    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:36:12.062659    8304 system_svc.go:56] duration metric: took 39.7628ms WaitForService to wait for kubelet
	I0719 04:36:12.062786    8304 kubeadm.go:582] duration metric: took 27.2910511s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:36:12.062786    8304 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:36:12.210547    8304 request.go:629] Waited for 147.4395ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes
	I0719 04:36:12.210664    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes
	I0719 04:36:12.210664    8304 round_trippers.go:469] Request Headers:
	I0719 04:36:12.210766    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:36:12.210766    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:36:12.216164    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:36:12.217572    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:36:12.217572    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:36:12.217572    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:36:12.217572    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:36:12.217676    8304 node_conditions.go:105] duration metric: took 154.8883ms to run NodePressure ...
	I0719 04:36:12.217676    8304 start.go:241] waiting for startup goroutines ...
	I0719 04:36:12.217742    8304 start.go:255] writing updated cluster config ...
	I0719 04:36:12.222413    8304 out.go:177] 
	I0719 04:36:12.238547    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:36:12.238816    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:36:12.244980    8304 out.go:177] * Starting "ha-062500-m03" control-plane node in "ha-062500" cluster
	I0719 04:36:12.248953    8304 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 04:36:12.248953    8304 cache.go:56] Caching tarball of preloaded images
	I0719 04:36:12.248953    8304 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 04:36:12.249496    8304 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 04:36:12.249669    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:36:12.254095    8304 start.go:360] acquireMachinesLock for ha-062500-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 04:36:12.254095    8304 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-062500-m03"
	I0719 04:36:12.254095    8304 start.go:93] Provisioning new machine with config: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:36:12.254095    8304 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0719 04:36:12.257899    8304 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 04:36:12.259172    8304 start.go:159] libmachine.API.Create for "ha-062500" (driver="hyperv")
	I0719 04:36:12.259172    8304 client.go:168] LocalClient.Create starting
	I0719 04:36:12.259332    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 04:36:12.259332    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:36:12.259879    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:36:12.259879    8304 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 04:36:12.259879    8304 main.go:141] libmachine: Decoding PEM data...
	I0719 04:36:12.259879    8304 main.go:141] libmachine: Parsing certificate...
	I0719 04:36:12.260446    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 04:36:14.208710    8304 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 04:36:14.209209    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:14.209313    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 04:36:16.006467    8304 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 04:36:16.006467    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:16.006467    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:36:17.555349    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:36:17.556030    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:17.556100    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:36:21.440207    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:36:21.440290    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:21.443219    8304 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 04:36:21.895735    8304 main.go:141] libmachine: Creating SSH key...
	I0719 04:36:21.981554    8304 main.go:141] libmachine: Creating VM...
	I0719 04:36:21.981554    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 04:36:25.179617    8304 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 04:36:25.179780    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:25.179855    8304 main.go:141] libmachine: Using switch "Default Switch"
	I0719 04:36:25.179855    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 04:36:27.056358    8304 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 04:36:27.057242    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:27.057242    8304 main.go:141] libmachine: Creating VHD
	I0719 04:36:27.057376    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 04:36:31.106456    8304 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 02FB9D75-0A7D-44C9-8DB3-21F70D7B66F0
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 04:36:31.106456    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:31.106456    8304 main.go:141] libmachine: Writing magic tar header
	I0719 04:36:31.107446    8304 main.go:141] libmachine: Writing SSH key tar header
	I0719 04:36:31.117449    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 04:36:34.495264    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:34.495810    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:34.495974    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\disk.vhd' -SizeBytes 20000MB
	I0719 04:36:37.222989    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:37.222989    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:37.223127    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-062500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0719 04:36:41.011818    8304 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-062500-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 04:36:41.011920    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:41.012001    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-062500-m03 -DynamicMemoryEnabled $false
	I0719 04:36:43.428664    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:43.428664    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:43.429652    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-062500-m03 -Count 2
	I0719 04:36:45.727991    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:45.727991    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:45.727991    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-062500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\boot2docker.iso'
	I0719 04:36:48.423800    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:48.424858    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:48.424858    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-062500-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\disk.vhd'
	I0719 04:36:51.195380    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:51.195664    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:51.195664    8304 main.go:141] libmachine: Starting VM...
	I0719 04:36:51.195664    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-062500-m03
	I0719 04:36:54.445105    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:54.445105    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:54.445105    8304 main.go:141] libmachine: Waiting for host to start...
	I0719 04:36:54.445799    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:36:56.847637    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:36:56.847830    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:36:56.847912    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:36:59.516916    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:36:59.516916    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:00.518662    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:02.825365    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:02.825365    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:02.825365    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:05.467277    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:37:05.468061    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:06.476755    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:08.807598    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:08.807654    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:08.807654    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:11.452138    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:37:11.452138    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:12.457214    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:14.782848    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:14.783419    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:14.783419    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:17.417851    8304 main.go:141] libmachine: [stdout =====>] : 
	I0719 04:37:17.417851    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:18.423958    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:20.765749    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:20.765749    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:20.765894    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:23.404281    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:23.404281    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:23.404809    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:25.661786    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:25.661786    8304 main.go:141] libmachine: [stderr =====>] : 
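The driver above polls `(Get-VM ...).state` and the adapter's `ipaddresses[0]` roughly every few seconds until the guest acquires an address (about 30 s in this run). A minimal sketch of that poll-until-non-empty pattern, with a temp file standing in for the PowerShell query (the helper and file names are illustrative, not minikube source):

```shell
# Hedged sketch: poll a query until it returns non-empty output, as the
# Hyper-V driver does while waiting for the VM to report an IP address.
IP_FILE=$(mktemp)
: > "$IP_FILE"
get_ip() { cat "$IP_FILE" 2>/dev/null; }           # stand-in for the Get-VM ipaddresses query
( sleep 1; echo 172.28.161.140 > "$IP_FILE" ) &    # simulate DHCP assigning an address later
IP=""
for attempt in $(seq 1 10); do
  IP=$(get_ip)
  [ -n "$IP" ] && break                            # got an address; stop polling
  sleep 1
done
wait
echo "got IP: $IP"
```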
	I0719 04:37:25.661786    8304 machine.go:94] provisionDockerMachine start ...
	I0719 04:37:25.662745    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:27.938786    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:27.938786    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:27.939691    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:30.635832    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:30.636117    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:30.641890    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:37:30.652607    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:37:30.652607    8304 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 04:37:30.781326    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 04:37:30.781414    8304 buildroot.go:166] provisioning hostname "ha-062500-m03"
	I0719 04:37:30.781488    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:33.005461    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:33.005461    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:33.005461    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:35.650548    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:35.651158    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:35.656085    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:37:35.656852    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:37:35.656852    8304 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-062500-m03 && echo "ha-062500-m03" | sudo tee /etc/hostname
	I0719 04:37:35.808581    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-062500-m03
	
	I0719 04:37:35.808581    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:38.046580    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:38.047154    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:38.047221    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:40.688863    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:40.689021    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:40.694485    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:37:40.695187    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:37:40.695187    8304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-062500-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-062500-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-062500-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 04:37:40.830146    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
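The SSH command above is an idempotent /etc/hosts patch: it rewrites the `127.0.1.1` entry to the new hostname only if no entry for that name exists yet, and appends one otherwise. The same logic can be exercised against a scratch copy rather than the real /etc/hosts (a sketch; `HOSTS` and its starting contents are illustrative):

```shell
# Hedged sketch: the /etc/hosts patch from the log, run against a temp file.
HOSTS=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 minikube\n' > "$HOSTS"
NAME=ha-062500-m03
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then        # no entry for the new name yet
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then  # a 127.0.1.1 line exists: rewrite it
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else                                                  # otherwise append a fresh line
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
cat "$HOSTS"
```

Running it a second time is a no-op, which is why the provisioner can issue it unconditionally on every start.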
	I0719 04:37:40.830261    8304 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 04:37:40.830294    8304 buildroot.go:174] setting up certificates
	I0719 04:37:40.830320    8304 provision.go:84] configureAuth start
	I0719 04:37:40.830320    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:43.066302    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:43.066302    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:43.066302    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:45.680184    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:45.680184    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:45.680343    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:47.886215    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:47.886215    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:47.886215    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:50.532283    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:50.532283    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:50.532370    8304 provision.go:143] copyHostCerts
	I0719 04:37:50.532370    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 04:37:50.532370    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 04:37:50.532370    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 04:37:50.533295    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 04:37:50.534425    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 04:37:50.534677    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 04:37:50.534765    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 04:37:50.535157    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 04:37:50.536213    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 04:37:50.536213    8304 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 04:37:50.536213    8304 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 04:37:50.536831    8304 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 04:37:50.537931    8304 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-062500-m03 san=[127.0.0.1 172.28.161.140 ha-062500-m03 localhost minikube]
	I0719 04:37:50.711408    8304 provision.go:177] copyRemoteCerts
	I0719 04:37:50.721871    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 04:37:50.721871    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:52.930221    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:52.930221    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:52.930221    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:37:55.591827    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:37:55.592133    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:55.592651    8304 sshutil.go:53] new ssh client: &{IP:172.28.161.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\id_rsa Username:docker}
	I0719 04:37:55.696752    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.974823s)
	I0719 04:37:55.696752    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 04:37:55.697384    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 04:37:55.744928    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 04:37:55.744928    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0719 04:37:55.793798    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 04:37:55.794291    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 04:37:55.840911    8304 provision.go:87] duration metric: took 15.0104184s to configureAuth
	I0719 04:37:55.840911    8304 buildroot.go:189] setting minikube options for container-runtime
	I0719 04:37:55.841781    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:37:55.841781    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:37:58.074954    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:37:58.074954    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:37:58.074954    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:00.704914    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:00.704914    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:00.710701    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:00.711453    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:00.711453    8304 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 04:38:00.836750    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 04:38:00.836750    8304 buildroot.go:70] root file system type: tmpfs
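`df --output=fstype / | tail -n 1` (GNU coreutils) is how the provisioner classifies the guest's root filesystem; the buildroot ISO reports `tmpfs`, which drives the tmpfs-specific handling that follows. Run locally, the value depends on the host (a sketch, not minikube code):

```shell
# Hedged sketch: detect the root filesystem type the way the log's SSH command does.
# On the minikube buildroot guest this prints "tmpfs"; on other hosts it will differ.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "root fstype: $FSTYPE"
```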
	I0719 04:38:00.837058    8304 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 04:38:00.837153    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:03.032084    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:03.032215    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:03.032278    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:05.689246    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:05.689246    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:05.694119    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:05.695068    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:05.695068    8304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.168.223"
	Environment="NO_PROXY=172.28.168.223,172.28.171.55"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 04:38:05.849044    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.168.223
	Environment=NO_PROXY=172.28.168.223,172.28.171.55
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 04:38:05.849209    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:08.085806    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:08.085806    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:08.086273    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:10.723830    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:10.723830    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:10.730087    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:10.730682    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:10.730682    8304 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 04:38:12.996398    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 04:38:12.996512    8304 machine.go:97] duration metric: took 47.3341818s to provisionDockerMachine
	I0719 04:38:12.996512    8304 client.go:171] duration metric: took 2m0.7359517s to LocalClient.Create
	I0719 04:38:12.996512    8304 start.go:167] duration metric: took 2m0.7359517s to libmachine.API.Create "ha-062500"
	I0719 04:38:12.996512    8304 start.go:293] postStartSetup for "ha-062500-m03" (driver="hyperv")
	I0719 04:38:12.996714    8304 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 04:38:13.009645    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 04:38:13.009645    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:15.251033    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:15.251033    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:15.251314    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:17.961438    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:17.961742    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:17.962219    8304 sshutil.go:53] new ssh client: &{IP:172.28.161.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\id_rsa Username:docker}
	I0719 04:38:18.069221    8304 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.0595175s)
	I0719 04:38:18.082042    8304 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 04:38:18.088836    8304 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 04:38:18.088920    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 04:38:18.089517    8304 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 04:38:18.090817    8304 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 04:38:18.090900    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 04:38:18.104517    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 04:38:18.124249    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 04:38:18.171090    8304 start.go:296] duration metric: took 5.1744s for postStartSetup
	I0719 04:38:18.173555    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:20.405346    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:20.405346    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:20.405346    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:23.047973    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:23.048169    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:23.048477    8304 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\config.json ...
	I0719 04:38:23.051731    8304 start.go:128] duration metric: took 2m10.7961321s to createHost
	I0719 04:38:23.051817    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:25.285354    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:25.285354    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:25.285354    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:27.930325    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:27.930325    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:27.937360    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:27.938005    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:27.938005    8304 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 04:38:28.063354    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363908.068956500
	
	I0719 04:38:28.063354    8304 fix.go:216] guest clock: 1721363908.068956500
	I0719 04:38:28.063354    8304 fix.go:229] Guest: 2024-07-19 04:38:28.0689565 +0000 UTC Remote: 2024-07-19 04:38:23.0518173 +0000 UTC m=+593.907235101 (delta=5.0171392s)
	I0719 04:38:28.063354    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:30.328771    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:30.328771    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:30.328771    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:33.027801    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:33.027801    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:33.034848    8304 main.go:141] libmachine: Using SSH client type: native
	I0719 04:38:33.035486    8304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.161.140 22 <nil> <nil>}
	I0719 04:38:33.035486    8304 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721363908
	I0719 04:38:33.185397    8304 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 04:38:28 UTC 2024
	
	I0719 04:38:33.185430    8304 fix.go:236] clock set: Fri Jul 19 04:38:28 UTC 2024
	 (err=<nil>)
	I0719 04:38:33.185430    8304 start.go:83] releasing machines lock for "ha-062500-m03", held for 2m20.9297137s
	I0719 04:38:33.185703    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:35.476929    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:35.477161    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:35.477227    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:38.230311    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:38.230311    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:38.236514    8304 out.go:177] * Found network options:
	I0719 04:38:38.242556    8304 out.go:177]   - NO_PROXY=172.28.168.223,172.28.171.55
	W0719 04:38:38.248877    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:38:38.248877    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:38:38.250878    8304 out.go:177]   - NO_PROXY=172.28.168.223,172.28.171.55
	W0719 04:38:38.253872    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:38:38.253872    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:38:38.255286    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 04:38:38.255398    8304 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 04:38:38.258699    8304 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 04:38:38.258699    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:38.269746    8304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 04:38:38.269746    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500-m03 ).state
	I0719 04:38:40.606197    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:40.606197    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:40.606933    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:40.614759    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:40.614759    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:40.614954    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500-m03 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:43.359498    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:43.359498    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:43.360475    8304 sshutil.go:53] new ssh client: &{IP:172.28.161.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\id_rsa Username:docker}
	I0719 04:38:43.385920    8304 main.go:141] libmachine: [stdout =====>] : 172.28.161.140
	
	I0719 04:38:43.386773    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:43.387526    8304 sshutil.go:53] new ssh client: &{IP:172.28.161.140 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500-m03\id_rsa Username:docker}
	I0719 04:38:43.454475    8304 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1846697s)
	W0719 04:38:43.454475    8304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 04:38:43.466246    8304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 04:38:43.473292    8304 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2145332s)
	W0719 04:38:43.473417    8304 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 04:38:43.504551    8304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 04:38:43.504551    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:38:43.504551    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:38:43.553113    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 04:38:43.585639    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0719 04:38:43.593796    8304 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 04:38:43.593796    8304 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 04:38:43.611903    8304 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 04:38:43.628556    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 04:38:43.660199    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:38:43.696203    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 04:38:43.731386    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 04:38:43.763909    8304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 04:38:43.795007    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 04:38:43.828759    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 04:38:43.861372    8304 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 04:38:43.898876    8304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 04:38:43.930528    8304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 04:38:43.961084    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:44.169583    8304 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 04:38:44.208988    8304 start.go:495] detecting cgroup driver to use...
	I0719 04:38:44.225381    8304 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 04:38:44.264349    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:38:44.301613    8304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 04:38:44.358726    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 04:38:44.397618    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:38:44.432281    8304 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 04:38:44.496076    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 04:38:44.520573    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 04:38:44.568753    8304 ssh_runner.go:195] Run: which cri-dockerd
	I0719 04:38:44.588944    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 04:38:44.606966    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 04:38:44.657049    8304 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 04:38:44.858832    8304 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 04:38:45.051566    8304 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 04:38:45.052488    8304 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 04:38:45.096717    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:45.298529    8304 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 04:38:47.900869    8304 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6013568s)
	I0719 04:38:47.912673    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 04:38:47.949716    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:38:47.983804    8304 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 04:38:48.186483    8304 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 04:38:48.385418    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:48.595727    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 04:38:48.635759    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 04:38:48.667764    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:48.880626    8304 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 04:38:48.996872    8304 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 04:38:49.008915    8304 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 04:38:49.017858    8304 start.go:563] Will wait 60s for crictl version
	I0719 04:38:49.028640    8304 ssh_runner.go:195] Run: which crictl
	I0719 04:38:49.046514    8304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 04:38:49.111006    8304 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 04:38:49.118990    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:38:49.162268    8304 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 04:38:49.197161    8304 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 04:38:49.201296    8304 out.go:177]   - env NO_PROXY=172.28.168.223
	I0719 04:38:49.203915    8304 out.go:177]   - env NO_PROXY=172.28.168.223,172.28.171.55
	I0719 04:38:49.206346    8304 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 04:38:49.215294    8304 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 04:38:49.215708    8304 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 04:38:49.215708    8304 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 04:38:49.215708    8304 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 04:38:49.219359    8304 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 04:38:49.219382    8304 ip.go:210] interface addr: 172.28.160.1/20
	I0719 04:38:49.231133    8304 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 04:38:49.237974    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 04:38:49.264356    8304 mustload.go:65] Loading cluster: ha-062500
	I0719 04:38:49.265631    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:38:49.266919    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:38:51.447338    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:51.447558    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:51.447558    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:38:51.448435    8304 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500 for IP: 172.28.161.140
	I0719 04:38:51.448435    8304 certs.go:194] generating shared ca certs ...
	I0719 04:38:51.448493    8304 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:38:51.449395    8304 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 04:38:51.449927    8304 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 04:38:51.449982    8304 certs.go:256] generating profile certs ...
	I0719 04:38:51.451093    8304 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\client.key
	I0719 04:38:51.451295    8304 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.6ecf5f11
	I0719 04:38:51.451521    8304 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.6ecf5f11 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.168.223 172.28.171.55 172.28.161.140 172.28.175.254]
	I0719 04:38:51.686772    8304 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.6ecf5f11 ...
	I0719 04:38:51.686772    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.6ecf5f11: {Name:mk966cd9b89e774069784355cc8da1117973bc8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:38:51.688645    8304 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.6ecf5f11 ...
	I0719 04:38:51.688645    8304 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.6ecf5f11: {Name:mk3473ef6e1b5ac680e036b33607771f3b5c536e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 04:38:51.689467    8304 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt.6ecf5f11 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt
	I0719 04:38:51.701233    8304 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key.6ecf5f11 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key
	I0719 04:38:51.703293    8304 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key
	I0719 04:38:51.703293    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 04:38:51.703441    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 04:38:51.703441    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 04:38:51.703441    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 04:38:51.703441    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 04:38:51.704085    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 04:38:51.712298    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 04:38:51.712508    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 04:38:51.712508    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 04:38:51.712508    8304 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 04:38:51.713228    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 04:38:51.713628    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 04:38:51.713953    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 04:38:51.714016    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 04:38:51.714298    8304 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 04:38:51.714298    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 04:38:51.714298    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 04:38:51.714298    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:38:51.715291    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:38:53.922590    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:38:53.922590    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:53.922590    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:38:56.549638    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:38:56.549638    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:38:56.550520    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:38:56.651768    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0719 04:38:56.659638    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0719 04:38:56.691129    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0719 04:38:56.698104    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0719 04:38:56.729716    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0719 04:38:56.736877    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0719 04:38:56.769154    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0719 04:38:56.778242    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0719 04:38:56.811097    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0719 04:38:56.818057    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0719 04:38:56.848295    8304 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0719 04:38:56.854647    8304 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0719 04:38:56.874190    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 04:38:56.922285    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 04:38:56.966116    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 04:38:57.012779    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 04:38:57.065835    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0719 04:38:57.113320    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 04:38:57.160981    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 04:38:57.209129    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-062500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 04:38:57.258436    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 04:38:57.308974    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 04:38:57.355022    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 04:38:57.400532    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0719 04:38:57.432231    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0719 04:38:57.461470    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0719 04:38:57.494650    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0719 04:38:57.526960    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0719 04:38:57.560812    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0719 04:38:57.595401    8304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0719 04:38:57.659492    8304 ssh_runner.go:195] Run: openssl version
	I0719 04:38:57.681800    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 04:38:57.713786    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 04:38:57.721069    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 04:38:57.732159    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 04:38:57.752790    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
	I0719 04:38:57.783730    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 04:38:57.816130    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 04:38:57.823167    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 04:38:57.834762    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 04:38:57.856488    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 04:38:57.887133    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 04:38:57.920853    8304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:38:57.928831    8304 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:38:57.939606    8304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 04:38:57.962288    8304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
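	The sequence above installs each PEM into /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash (hence the opaque names 51391683.0, 3ec20f2e.0, b5213941.0). A self-contained sketch of that naming scheme, using a throwaway CA in a temp directory rather than the real minikube certs:

```shell
# Demonstrates OpenSSL's <subject-hash>.0 trust-store naming; the CA here is a
# throwaway generated in a temp dir, not the minikube CA from the log.
tmp=$(mktemp -d)
mkdir "$tmp/certs"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
# OpenSSL locates CAs by looking up <hash>.0 symlinks in the certs directory,
# which is why the log links each installed PEM under its hash.
ln -fs "$tmp/ca.pem" "$tmp/certs/$hash.0"
echo "linked as $hash.0"
```

	The `test -L || ln -fs` guard in the logged commands makes the link step idempotent across restarts.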
	I0719 04:38:57.995527    8304 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 04:38:58.001061    8304 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 04:38:58.002092    8304 kubeadm.go:934] updating node {m03 172.28.161.140 8443 v1.30.3 docker true true} ...
	I0719 04:38:58.002289    8304 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-062500-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.161.140
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
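	The doubled ExecStart= in the kubelet unit above is the standard systemd drop-in pattern: the empty assignment clears any command inherited from the base unit before the real command line is set. A minimal sketch writing such a drop-in to a scratch directory (the kubelet path and flags are copied from the log; nothing is installed):

```shell
# Writes a kubelet drop-in to a scratch dir. The empty ExecStart= resets the
# command list inherited from the base unit so the next ExecStart= replaces it
# outright; without the reset, systemd rejects a second ExecStart in a
# non-oneshot service.
d=$(mktemp -d)
cat > "$d/10-kubeadm.conf" <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.161.140
EOF
grep -c '^ExecStart=' "$d/10-kubeadm.conf"   # prints 2
```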
	I0719 04:38:58.002442    8304 kube-vip.go:115] generating kube-vip config ...
	I0719 04:38:58.015230    8304 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0719 04:38:58.047099    8304 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0719 04:38:58.047099    8304 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.28.175.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable

	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0719 04:38:58.062722    8304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 04:38:58.083989    8304 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 04:38:58.096545    8304 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 04:38:58.118660    8304 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0719 04:38:58.118660    8304 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 04:38:58.118660    8304 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
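	Each binary above is fetched with a `checksum=file:` companion URL and verified before install; the check reduces to comparing SHA-256 digests. A local sketch with a stand-in file instead of a real download (in the real flow the .sha256 file comes from dl.k8s.io alongside the binary):

```shell
# Simulates the download-then-verify step with a local stand-in binary; the
# payload and filenames here are illustrative, not the real kubelet.
tmp=$(mktemp -d)
printf 'stand-in kubelet payload' > "$tmp/kubelet"
sha256sum "$tmp/kubelet" | awk '{print $1}' > "$tmp/kubelet.sha256"
expected=$(cat "$tmp/kubelet.sha256")
actual=$(sha256sum "$tmp/kubelet" | awk '{print $1}')
if [ "$expected" = "$actual" ]; then echo verified; else echo corrupt; fi   # prints verified
```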
	I0719 04:38:58.118660    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:38:58.118660    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:38:58.134489    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 04:38:58.137435    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:38:58.140121    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 04:38:58.142321    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 04:38:58.143251    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 04:38:58.184989    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 04:38:58.184989    8304 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:38:58.184989    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 04:38:58.197539    8304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 04:38:58.252513    8304 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 04:38:58.252513    8304 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 04:38:59.529561    8304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0719 04:38:59.548178    8304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0719 04:38:59.580235    8304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 04:38:59.612351    8304 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0719 04:38:59.657494    8304 ssh_runner.go:195] Run: grep 172.28.175.254	control-plane.minikube.internal$ /etc/hosts
	I0719 04:38:59.663526    8304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.175.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
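	The one-liner above is an idempotent hosts-file update: filter out any existing control-plane.minikube.internal entry, append the current VIP mapping, then copy the result back into place. The same logic against a scratch copy instead of the real /etc/hosts:

```shell
# Rewrites a scratch hosts file the way the logged one-liner rewrites
# /etc/hosts: drop any stale control-plane mapping, append the current VIP,
# then swap the file into place, leaving exactly one entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.1\tcontrol-plane.minikube.internal\n' > "$hosts"
{ grep -v 'control-plane\.minikube\.internal$' "$hosts"
  printf '172.28.175.254\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

	Because stale entries are stripped before the append, rerunning the command never accumulates duplicate mappings.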
	I0719 04:38:59.699571    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:38:59.909515    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:38:59.942346    8304 host.go:66] Checking if "ha-062500" exists ...
	I0719 04:38:59.942346    8304 start.go:317] joinCluster: &{Name:ha-062500 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 Clust
erName:ha-062500 Namespace:default APIServerHAVIP:172.28.175.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.168.223 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.171.55 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.28.161.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-d
ns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 04:38:59.943546    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 04:38:59.943763    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-062500 ).state
	I0719 04:39:02.157605    8304 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 04:39:02.157605    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:39:02.157605    8304 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-062500 ).networkadapters[0]).ipaddresses[0]
	I0719 04:39:04.830211    8304 main.go:141] libmachine: [stdout =====>] : 172.28.168.223
	
	I0719 04:39:04.830295    8304 main.go:141] libmachine: [stderr =====>] : 
	I0719 04:39:04.830866    8304 sshutil.go:53] new ssh client: &{IP:172.28.168.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-062500\id_rsa Username:docker}
	I0719 04:39:05.049950    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1063454s)
	I0719 04:39:05.049950    8304 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.28.161.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:39:05.049950    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token whbjti.l37hrpvm5f3lggpj --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-062500-m03 --control-plane --apiserver-advertise-address=172.28.161.140 --apiserver-bind-port=8443"
	I0719 04:39:49.278256    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token whbjti.l37hrpvm5f3lggpj --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-062500-m03 --control-plane --apiserver-advertise-address=172.28.161.140 --apiserver-bind-port=8443": (44.2277202s)
	I0719 04:39:49.278328    8304 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 04:39:50.368201    8304 ssh_runner.go:235] Completed: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet": (1.0897909s)
	I0719 04:39:50.379636    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-062500-m03 minikube.k8s.io/updated_at=2024_07_19T04_39_50_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=ha-062500 minikube.k8s.io/primary=false
	I0719 04:39:50.576587    8304 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-062500-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0719 04:39:50.738179    8304 start.go:319] duration metric: took 50.7952483s to joinCluster
	I0719 04:39:50.738179    8304 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.28.161.140 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 04:39:50.740276    8304 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 04:39:50.742006    8304 out.go:177] * Verifying Kubernetes components...
	I0719 04:39:50.756690    8304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 04:39:51.095841    8304 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 04:39:51.124901    8304 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 04:39:51.125822    8304 kapi.go:59] client config for ha-062500: &rest.Config{Host:"https://172.28.175.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-062500\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0719 04:39:51.125822    8304 kubeadm.go:483] Overriding stale ClientConfig host https://172.28.175.254:8443 with https://172.28.168.223:8443
	I0719 04:39:51.126829    8304 node_ready.go:35] waiting up to 6m0s for node "ha-062500-m03" to be "Ready" ...
	I0719 04:39:51.126829    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:51.126829    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:51.126829    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:51.126829    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:51.139862    8304 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0719 04:39:51.641304    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:51.641304    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:51.641304    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:51.641304    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:51.646338    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:52.131962    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:52.131962    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:52.131962    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:52.131962    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:52.139536    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:39:52.639176    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:52.639176    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:52.639176    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:52.639176    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:52.646212    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:39:53.130811    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:53.130884    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:53.130884    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:53.130929    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:53.136389    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:53.137781    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:39:53.637534    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:53.637599    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:53.637599    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:53.637599    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:53.642076    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:39:54.131097    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:54.131097    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:54.131097    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:54.131097    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:54.136649    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:54.639712    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:54.639890    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:54.639890    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:54.639890    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:54.644945    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:55.129396    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:55.129396    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:55.129396    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:55.129396    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:55.133412    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:39:55.639624    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:55.639624    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:55.639624    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:55.639624    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:55.645227    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:55.646755    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:39:56.128599    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:56.128599    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:56.128704    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:56.128704    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:56.133953    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:56.636140    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:56.636140    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:56.636247    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:56.636247    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:56.639840    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:39:57.128049    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:57.128091    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:57.128091    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:57.128165    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:57.135510    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:39:57.628505    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:57.628793    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:57.628793    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:57.628793    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:57.636233    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:39:58.132905    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:58.132905    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:58.132905    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:58.132905    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:58.137734    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:39:58.139217    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:39:58.640440    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:58.640582    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:58.640582    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:58.640582    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:58.646167    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:39:59.127507    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:59.127571    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:59.127571    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:59.127571    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:59.133641    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:39:59.634933    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:39:59.634933    8304 round_trippers.go:469] Request Headers:
	I0719 04:39:59.635028    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:39:59.635028    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:39:59.639479    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:00.134968    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:00.135114    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:00.135114    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:00.135114    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:00.139920    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:00.140755    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:00.637454    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:00.637526    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:00.637526    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:00.637526    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:00.642173    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:01.137538    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:01.137538    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:01.137538    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:01.137538    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:01.142133    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:01.637400    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:01.637507    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:01.637507    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:01.637507    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:01.641959    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:02.139334    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:02.139464    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:02.139464    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:02.139464    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:02.144939    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:02.145528    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:02.641970    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:02.642118    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:02.642118    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:02.642118    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:02.646555    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:03.128341    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:03.128490    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:03.128547    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:03.128547    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:03.132323    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:03.629311    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:03.629311    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:03.629311    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:03.629423    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:03.636321    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:40:04.132070    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:04.132275    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:04.132275    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:04.132275    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:04.137074    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:04.633389    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:04.633389    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:04.633389    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:04.633389    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:04.638394    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:04.639393    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:05.137956    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:05.137956    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:05.137956    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:05.137956    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:05.145383    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:40:05.635144    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:05.635144    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:05.635144    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:05.635144    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:05.640774    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:06.136827    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:06.136878    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:06.136878    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:06.136878    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:06.141650    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:06.633644    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:06.633888    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:06.633888    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:06.633888    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:06.639156    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:06.639864    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:07.133136    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:07.133136    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:07.133136    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:07.133136    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:07.138541    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:07.633315    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:07.633315    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:07.633315    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:07.633315    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:07.638963    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:08.132679    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:08.132788    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:08.132788    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:08.132788    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:08.137681    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:08.632870    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:08.632870    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:08.632996    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:08.632996    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:08.649964    8304 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0719 04:40:08.650751    8304 node_ready.go:53] node "ha-062500-m03" has status "Ready":"False"
	I0719 04:40:09.131421    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:09.131421    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.131517    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.131517    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.134981    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:09.137897    8304 node_ready.go:49] node "ha-062500-m03" has status "Ready":"True"
	I0719 04:40:09.138442    8304 node_ready.go:38] duration metric: took 18.0114054s for node "ha-062500-m03" to be "Ready" ...
	I0719 04:40:09.138508    8304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:40:09.138727    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:09.138757    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.138757    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.138757    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.148730    8304 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:40:09.158711    8304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.158711    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jb6nt
	I0719 04:40:09.158711    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.158711    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.158711    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.175309    8304 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0719 04:40:09.176376    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.176439    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.176439    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.176439    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.184963    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:40:09.185707    8304 pod_ready.go:92] pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.185707    8304 pod_ready.go:81] duration metric: took 26.9957ms for pod "coredns-7db6d8ff4d-jb6nt" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.185707    8304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.186302    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jpmb4
	I0719 04:40:09.186302    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.186302    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.186302    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.190146    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:09.191198    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.191198    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.191198    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.191198    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.196134    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.196441    8304 pod_ready.go:92] pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.196441    8304 pod_ready.go:81] duration metric: took 10.7332ms for pod "coredns-7db6d8ff4d-jpmb4" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.196441    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.196441    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500
	I0719 04:40:09.196441    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.196441    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.196441    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.200639    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.201410    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.201410    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.201410    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.201410    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.205055    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:09.206020    8304 pod_ready.go:92] pod "etcd-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.206020    8304 pod_ready.go:81] duration metric: took 9.5793ms for pod "etcd-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.206020    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.206020    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500-m02
	I0719 04:40:09.206020    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.206020    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.206020    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.210039    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.210741    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:09.211544    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.211647    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.211647    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.215378    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:09.216189    8304 pod_ready.go:92] pod "etcd-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.216189    8304 pod_ready.go:81] duration metric: took 10.1686ms for pod "etcd-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.216189    8304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.336183    8304 request.go:629] Waited for 119.7862ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500-m03
	I0719 04:40:09.336265    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-062500-m03
	I0719 04:40:09.336265    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.336265    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.336265    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.340645    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.541132    8304 request.go:629] Waited for 199.0002ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:09.541497    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:09.541497    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.541497    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.541497    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.546397    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:09.547855    8304 pod_ready.go:92] pod "etcd-ha-062500-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.547855    8304 pod_ready.go:81] duration metric: took 331.6624ms for pod "etcd-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.547923    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.745581    8304 request.go:629] Waited for 197.5616ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500
	I0719 04:40:09.745581    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500
	I0719 04:40:09.745581    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.745581    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.745581    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.751621    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:40:09.933226    8304 request.go:629] Waited for 180.6563ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.933676    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:09.933676    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:09.933676    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:09.933794    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:09.939016    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:09.940358    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:09.940465    8304 pod_ready.go:81] duration metric: took 392.5372ms for pod "kube-apiserver-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:09.940465    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.136761    8304 request.go:629] Waited for 195.6804ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m02
	I0719 04:40:10.136834    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m02
	I0719 04:40:10.136897    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.136897    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.136897    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.141687    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:10.341312    8304 request.go:629] Waited for 198.3379ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:10.341312    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:10.341312    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.341312    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.341312    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.346525    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:10.347940    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:10.347992    8304 pod_ready.go:81] duration metric: took 407.5219ms for pod "kube-apiserver-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.348041    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.545529    8304 request.go:629] Waited for 197.276ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m03
	I0719 04:40:10.545649    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-062500-m03
	I0719 04:40:10.545649    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.545649    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.545866    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.553646    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:40:10.735379    8304 request.go:629] Waited for 180.5133ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:10.735563    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:10.735563    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.735563    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.735563    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.741052    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:10.742432    8304 pod_ready.go:92] pod "kube-apiserver-ha-062500-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:10.742471    8304 pod_ready.go:81] duration metric: took 394.4261ms for pod "kube-apiserver-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.742471    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:10.939057    8304 request.go:629] Waited for 196.3292ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500
	I0719 04:40:10.939057    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500
	I0719 04:40:10.939057    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:10.939057    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:10.939057    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:10.944726    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:11.144364    8304 request.go:629] Waited for 198.7815ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:11.144364    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:11.144364    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.144364    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.144364    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.148808    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:11.149098    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:11.149098    8304 pod_ready.go:81] duration metric: took 406.6223ms for pod "kube-controller-manager-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.149098    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.346399    8304 request.go:629] Waited for 196.7667ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m02
	I0719 04:40:11.346399    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m02
	I0719 04:40:11.346848    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.346965    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.346965    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.351770    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:11.533560    8304 request.go:629] Waited for 179.8545ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:11.533765    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:11.533901    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.533901    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.533901    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.539104    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:11.541004    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:11.541004    8304 pod_ready.go:81] duration metric: took 391.9013ms for pod "kube-controller-manager-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.541058    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.737507    8304 request.go:629] Waited for 196.3797ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m03
	I0719 04:40:11.737745    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-062500-m03
	I0719 04:40:11.737745    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.737898    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.737898    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.745609    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:40:11.942973    8304 request.go:629] Waited for 195.7217ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:11.943227    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:11.943305    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:11.943330    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:11.943330    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:11.947548    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:11.948837    8304 pod_ready.go:92] pod "kube-controller-manager-ha-062500-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:11.948970    8304 pod_ready.go:81] duration metric: took 407.9079ms for pod "kube-controller-manager-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:11.949065    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-g7z8c" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.131684    8304 request.go:629] Waited for 182.3278ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7z8c
	I0719 04:40:12.131938    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-g7z8c
	I0719 04:40:12.132014    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.132376    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.132376    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.140460    8304 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 04:40:12.333926    8304 request.go:629] Waited for 192.1163ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:12.334228    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:12.334228    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.334228    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.334228    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.338994    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:12.340574    8304 pod_ready.go:92] pod "kube-proxy-g7z8c" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:12.340574    8304 pod_ready.go:81] duration metric: took 391.5047ms for pod "kube-proxy-g7z8c" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.340574    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rtdgs" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.536974    8304 request.go:629] Waited for 196.2907ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtdgs
	I0719 04:40:12.536974    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rtdgs
	I0719 04:40:12.537204    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.537228    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.537228    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.542792    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:12.739958    8304 request.go:629] Waited for 195.5941ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:12.740099    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:12.740099    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.740099    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.740099    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.743715    8304 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 04:40:12.745308    8304 pod_ready.go:92] pod "kube-proxy-rtdgs" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:12.745308    8304 pod_ready.go:81] duration metric: took 404.7298ms for pod "kube-proxy-rtdgs" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.745308    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wv8bn" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:12.943698    8304 request.go:629] Waited for 198.3873ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wv8bn
	I0719 04:40:12.944006    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-wv8bn
	I0719 04:40:12.944006    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:12.944059    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:12.944076    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:12.948347    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:13.132346    8304 request.go:629] Waited for 181.6142ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:13.132613    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:13.132613    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.132684    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.132684    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.136808    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:13.137808    8304 pod_ready.go:92] pod "kube-proxy-wv8bn" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:13.137871    8304 pod_ready.go:81] duration metric: took 392.5579ms for pod "kube-proxy-wv8bn" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.137871    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.336477    8304 request.go:629] Waited for 198.4057ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500
	I0719 04:40:13.336669    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500
	I0719 04:40:13.336817    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.336817    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.336817    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.341759    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:13.539783    8304 request.go:629] Waited for 197.7434ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:13.539783    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500
	I0719 04:40:13.539783    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.539783    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.539783    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.545083    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:13.546724    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:13.546724    8304 pod_ready.go:81] duration metric: took 408.849ms for pod "kube-scheduler-ha-062500" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.546724    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.742005    8304 request.go:629] Waited for 194.9275ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m02
	I0719 04:40:13.742156    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m02
	I0719 04:40:13.742156    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.742156    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.742156    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.747811    8304 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 04:40:13.944527    8304 request.go:629] Waited for 195.9958ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:13.944702    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m02
	I0719 04:40:13.944702    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:13.944702    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:13.944702    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:13.951465    8304 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 04:40:13.953931    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:13.953931    8304 pod_ready.go:81] duration metric: took 407.202ms for pod "kube-scheduler-ha-062500-m02" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:13.953931    8304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:14.133774    8304 request.go:629] Waited for 179.4456ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m03
	I0719 04:40:14.133932    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-062500-m03
	I0719 04:40:14.133932    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.133998    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.133998    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.141811    8304 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 04:40:14.335021    8304 request.go:629] Waited for 191.3587ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:14.335617    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes/ha-062500-m03
	I0719 04:40:14.335617    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.335617    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.335617    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.340533    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:14.341649    8304 pod_ready.go:92] pod "kube-scheduler-ha-062500-m03" in "kube-system" namespace has status "Ready":"True"
	I0719 04:40:14.341649    8304 pod_ready.go:81] duration metric: took 387.6486ms for pod "kube-scheduler-ha-062500-m03" in "kube-system" namespace to be "Ready" ...
	I0719 04:40:14.341717    8304 pod_ready.go:38] duration metric: took 5.2030838s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 04:40:14.341717    8304 api_server.go:52] waiting for apiserver process to appear ...
	I0719 04:40:14.353430    8304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 04:40:14.382324    8304 api_server.go:72] duration metric: took 23.6427717s to wait for apiserver process to appear ...
	I0719 04:40:14.382324    8304 api_server.go:88] waiting for apiserver healthz status ...
	I0719 04:40:14.382422    8304 api_server.go:253] Checking apiserver healthz at https://172.28.168.223:8443/healthz ...
	I0719 04:40:14.398288    8304 api_server.go:279] https://172.28.168.223:8443/healthz returned 200:
	ok
	I0719 04:40:14.398288    8304 round_trippers.go:463] GET https://172.28.168.223:8443/version
	I0719 04:40:14.398288    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.398288    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.398288    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.399133    8304 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 04:40:14.400262    8304 api_server.go:141] control plane version: v1.30.3
	I0719 04:40:14.400341    8304 api_server.go:131] duration metric: took 18.0169ms to wait for apiserver health ...
	I0719 04:40:14.400341    8304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 04:40:14.539806    8304 request.go:629] Waited for 139.1576ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:14.539806    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:14.540016    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.540016    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.540016    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.549336    8304 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 04:40:14.560637    8304 system_pods.go:59] 24 kube-system pods found
	I0719 04:40:14.560637    8304 system_pods.go:61] "coredns-7db6d8ff4d-jb6nt" [799dd902-ac1e-4264-91b3-18bdfcd3c8d6] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "coredns-7db6d8ff4d-jpmb4" [f08afb24-1862-49cd-9065-fd21c96614ca] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "etcd-ha-062500" [7fcd86be-7022-4c7c-8144-e2537879c108] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "etcd-ha-062500-m02" [d7896def-bce8-4197-8016-90a7e745f68c] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "etcd-ha-062500-m03" [f90e665c-fb9e-48b1-abcc-dc990ca0a31b] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kindnet-g9b42" [7c244eed-a81b-4088-adfa-bcdccd3cb4f0] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kindnet-sk9jr" [06a7499a-0467-433d-9e65-5352dec711cf] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kindnet-xw86l" [8513df89-57a9-4e7a-b30f-df6c7ef5ed58] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-apiserver-ha-062500" [495cdc56-2af6-4ceb-acee-26b9bc09d268] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-apiserver-ha-062500-m02" [f880cb8b-d5aa-4141-8031-26951f630b43] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-apiserver-ha-062500-m03" [29968640-2d8b-4694-8b0a-d6cfaaa20cdc] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-controller-manager-ha-062500" [72ca647c-6a15-4408-9bc7-ba1be775d35a] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-controller-manager-ha-062500-m02" [031f15e6-c214-44e4-88f7-f7636f1f4a5e] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-controller-manager-ha-062500-m03" [33115099-6fd3-4486-a359-ab11c68c4f0e] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-proxy-g7z8c" [a8637650-ff75-4192-90ec-acfc39f14a7f] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-proxy-rtdgs" [5c014afc-3ab0-4d20-83b6-adbb9a6133ec] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-proxy-wv8bn" [75f8ca14-0f7c-4e85-884c-b55161236c22] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-scheduler-ha-062500" [bc127693-7c90-4778-bef4-a9aa231e89a8] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-scheduler-ha-062500-m02" [37551193-9128-4afd-9653-1639d1727249] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-scheduler-ha-062500-m03" [01ce36f4-8c3e-4bd7-aa4f-230aa4273049] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-vip-ha-062500" [87843ee5-6fdf-473a-8818-47b1927340d6] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-vip-ha-062500-m02" [8ce744ae-1492-4359-860f-f7ff13977733] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "kube-vip-ha-062500-m03" [30925675-f944-440d-a0b5-a8356bd0297b] Running
	I0719 04:40:14.560637    8304 system_pods.go:61] "storage-provisioner" [d029a307-143b-4ef5-8619-f06e267d756c] Running
	I0719 04:40:14.560637    8304 system_pods.go:74] duration metric: took 160.2936ms to wait for pod list to return data ...
	I0719 04:40:14.560637    8304 default_sa.go:34] waiting for default service account to be created ...
	I0719 04:40:14.742154    8304 request.go:629] Waited for 181.515ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:40:14.742154    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/default/serviceaccounts
	I0719 04:40:14.742154    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.742154    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.742154    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.746655    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:14.747615    8304 default_sa.go:45] found service account: "default"
	I0719 04:40:14.747615    8304 default_sa.go:55] duration metric: took 186.9759ms for default service account to be created ...
	I0719 04:40:14.747677    8304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 04:40:14.944354    8304 request.go:629] Waited for 196.3359ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:14.944410    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/namespaces/kube-system/pods
	I0719 04:40:14.944410    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:14.944410    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:14.944410    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:14.954449    8304 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 04:40:14.965143    8304 system_pods.go:86] 24 kube-system pods found
	I0719 04:40:14.965143    8304 system_pods.go:89] "coredns-7db6d8ff4d-jb6nt" [799dd902-ac1e-4264-91b3-18bdfcd3c8d6] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "coredns-7db6d8ff4d-jpmb4" [f08afb24-1862-49cd-9065-fd21c96614ca] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "etcd-ha-062500" [7fcd86be-7022-4c7c-8144-e2537879c108] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "etcd-ha-062500-m02" [d7896def-bce8-4197-8016-90a7e745f68c] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "etcd-ha-062500-m03" [f90e665c-fb9e-48b1-abcc-dc990ca0a31b] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kindnet-g9b42" [7c244eed-a81b-4088-adfa-bcdccd3cb4f0] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kindnet-sk9jr" [06a7499a-0467-433d-9e65-5352dec711cf] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kindnet-xw86l" [8513df89-57a9-4e7a-b30f-df6c7ef5ed58] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kube-apiserver-ha-062500" [495cdc56-2af6-4ceb-acee-26b9bc09d268] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kube-apiserver-ha-062500-m02" [f880cb8b-d5aa-4141-8031-26951f630b43] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kube-apiserver-ha-062500-m03" [29968640-2d8b-4694-8b0a-d6cfaaa20cdc] Running
	I0719 04:40:14.965143    8304 system_pods.go:89] "kube-controller-manager-ha-062500" [72ca647c-6a15-4408-9bc7-ba1be775d35a] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-controller-manager-ha-062500-m02" [031f15e6-c214-44e4-88f7-f7636f1f4a5e] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-controller-manager-ha-062500-m03" [33115099-6fd3-4486-a359-ab11c68c4f0e] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-proxy-g7z8c" [a8637650-ff75-4192-90ec-acfc39f14a7f] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-proxy-rtdgs" [5c014afc-3ab0-4d20-83b6-adbb9a6133ec] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-proxy-wv8bn" [75f8ca14-0f7c-4e85-884c-b55161236c22] Running
	I0719 04:40:14.965709    8304 system_pods.go:89] "kube-scheduler-ha-062500" [bc127693-7c90-4778-bef4-a9aa231e89a8] Running
	I0719 04:40:14.965784    8304 system_pods.go:89] "kube-scheduler-ha-062500-m02" [37551193-9128-4afd-9653-1639d1727249] Running
	I0719 04:40:14.965784    8304 system_pods.go:89] "kube-scheduler-ha-062500-m03" [01ce36f4-8c3e-4bd7-aa4f-230aa4273049] Running
	I0719 04:40:14.965825    8304 system_pods.go:89] "kube-vip-ha-062500" [87843ee5-6fdf-473a-8818-47b1927340d6] Running
	I0719 04:40:14.965825    8304 system_pods.go:89] "kube-vip-ha-062500-m02" [8ce744ae-1492-4359-860f-f7ff13977733] Running
	I0719 04:40:14.965825    8304 system_pods.go:89] "kube-vip-ha-062500-m03" [30925675-f944-440d-a0b5-a8356bd0297b] Running
	I0719 04:40:14.965825    8304 system_pods.go:89] "storage-provisioner" [d029a307-143b-4ef5-8619-f06e267d756c] Running
	I0719 04:40:14.965865    8304 system_pods.go:126] duration metric: took 218.1658ms to wait for k8s-apps to be running ...
	I0719 04:40:14.965865    8304 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 04:40:14.976417    8304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 04:40:15.002477    8304 system_svc.go:56] duration metric: took 36.6113ms WaitForService to wait for kubelet
	I0719 04:40:15.003201    8304 kubeadm.go:582] duration metric: took 24.2636411s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 04:40:15.003201    8304 node_conditions.go:102] verifying NodePressure condition ...
	I0719 04:40:15.131876    8304 request.go:629] Waited for 128.4836ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.168.223:8443/api/v1/nodes
	I0719 04:40:15.131876    8304 round_trippers.go:463] GET https://172.28.168.223:8443/api/v1/nodes
	I0719 04:40:15.131876    8304 round_trippers.go:469] Request Headers:
	I0719 04:40:15.131876    8304 round_trippers.go:473]     Accept: application/json, */*
	I0719 04:40:15.131876    8304 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 04:40:15.136693    8304 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 04:40:15.137965    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:40:15.137965    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:40:15.137965    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:40:15.137965    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:40:15.137965    8304 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 04:40:15.137965    8304 node_conditions.go:123] node cpu capacity is 2
	I0719 04:40:15.137965    8304 node_conditions.go:105] duration metric: took 134.763ms to run NodePressure ...
	I0719 04:40:15.137965    8304 start.go:241] waiting for startup goroutines ...
	I0719 04:40:15.137965    8304 start.go:255] writing updated cluster config ...
	I0719 04:40:15.150405    8304 ssh_runner.go:195] Run: rm -f paused
	I0719 04:40:15.297399    8304 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 04:40:15.300922    8304 out.go:177] * Done! kubectl is now configured to use "ha-062500" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 19 04:32:14 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/21e0472b810d2894b3251cdc11420cc80a585b2140cacb54c1721668a1a2c4d4/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 04:32:14 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3cbe60a98d0a5f93ec9d91c28416a1e582614cd64562ebdb5222ed2c5b346786/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 04:32:14 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:32:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/19818c3d9e967acec6697c474831ffd5e6f5d7e1e8a807b73819c3349b0972c6/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.425926092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.426076893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.426119293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.426755698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.638377978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.638624179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.638800281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.639111783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.661004346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.661248948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.661370249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:32:14 ha-062500 dockerd[1439]: time="2024-07-19T04:32:14.663032562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:40:55 ha-062500 dockerd[1439]: time="2024-07-19T04:40:55.011712580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:40:55 ha-062500 dockerd[1439]: time="2024-07-19T04:40:55.011991185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:40:55 ha-062500 dockerd[1439]: time="2024-07-19T04:40:55.012015185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:40:55 ha-062500 dockerd[1439]: time="2024-07-19T04:40:55.012173588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:40:55 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:40:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d133268d9d7a083392c792d8717340f916c2e67fdfd99b4ec0c35d377ec662c5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 04:40:56 ha-062500 cri-dockerd[1332]: time="2024-07-19T04:40:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 19 04:40:56 ha-062500 dockerd[1439]: time="2024-07-19T04:40:56.871802642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:40:56 ha-062500 dockerd[1439]: time="2024-07-19T04:40:56.871911543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:40:56 ha-062500 dockerd[1439]: time="2024-07-19T04:40:56.871948344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:40:56 ha-062500 dockerd[1439]: time="2024-07-19T04:40:56.872735553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	02a0ee65995f3       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   17 minutes ago      Running             busybox                   0                   d133268d9d7a0       busybox-fc5497c4f-drzm5
	d25c4a2b3eb6f       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   19818c3d9e967       coredns-7db6d8ff4d-jpmb4
	8f2c7b9cacfa2       cbb01a7bd410d                                                                                         26 minutes ago      Running             coredns                   0                   3cbe60a98d0a5       coredns-7db6d8ff4d-jb6nt
	0ad384904d3a4       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   21e0472b810d2       storage-provisioner
	1ecc3bacfa9d8       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              26 minutes ago      Running             kindnet-cni               0                   22b7077a7b107       kindnet-sk9jr
	a00b203469643       55bb025d2cfa5                                                                                         26 minutes ago      Running             kube-proxy                0                   b2a69508a441d       kube-proxy-wv8bn
	3042d34fba992       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     27 minutes ago      Running             kube-vip                  0                   b3a39c0b82e5c       kube-vip-ha-062500
	3db2de00e2413       76932a3b37d7e                                                                                         27 minutes ago      Running             kube-controller-manager   0                   0ae49e148da79       kube-controller-manager-ha-062500
	6f24d8e2a5f0e       3edc18e7b7672                                                                                         27 minutes ago      Running             kube-scheduler            0                   ed5864d311f88       kube-scheduler-ha-062500
	79a4c71c9c9aa       3861cfcd7c04c                                                                                         27 minutes ago      Running             etcd                      0                   b0494ed88daad       etcd-ha-062500
	0e6e869de2f3d       1f6d574d502f3                                                                                         27 minutes ago      Running             kube-apiserver            0                   14a6f4c293f91       kube-apiserver-ha-062500
	
	
	==> coredns [8f2c7b9cacfa] <==
	[INFO] 10.244.1.2:58487 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000577s
	[INFO] 10.244.0.4:60846 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231303s
	[INFO] 10.244.0.4:38114 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.049382123s
	[INFO] 10.244.0.4:39370 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037775577s
	[INFO] 10.244.0.4:53688 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204503s
	[INFO] 10.244.2.2:50790 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000258703s
	[INFO] 10.244.2.2:52577 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000143302s
	[INFO] 10.244.2.2:57827 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182703s
	[INFO] 10.244.1.2:32821 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113201s
	[INFO] 10.244.1.2:60333 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000087501s
	[INFO] 10.244.1.2:45200 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000060301s
	[INFO] 10.244.1.2:52936 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133902s
	[INFO] 10.244.1.2:34981 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000601s
	[INFO] 10.244.1.2:33642 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000162302s
	[INFO] 10.244.0.4:46012 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000252303s
	[INFO] 10.244.0.4:48739 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169203s
	[INFO] 10.244.2.2:54941 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071501s
	[INFO] 10.244.1.2:58693 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000210203s
	[INFO] 10.244.0.4:33639 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124502s
	[INFO] 10.244.0.4:44098 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000236403s
	[INFO] 10.244.0.4:52780 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000213903s
	[INFO] 10.244.2.2:40272 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000246703s
	[INFO] 10.244.2.2:45577 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000689s
	[INFO] 10.244.1.2:55202 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186003s
	[INFO] 10.244.1.2:45696 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000066801s
	
	
	==> coredns [d25c4a2b3eb6] <==
	[INFO] 10.244.1.2:39548 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000074201s
	[INFO] 10.244.0.4:34254 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000307804s
	[INFO] 10.244.0.4:47466 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000140002s
	[INFO] 10.244.0.4:37327 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000139201s
	[INFO] 10.244.0.4:58603 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000173802s
	[INFO] 10.244.2.2:38729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000290604s
	[INFO] 10.244.2.2:56481 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013987377s
	[INFO] 10.244.2.2:58013 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093701s
	[INFO] 10.244.2.2:44021 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000059901s
	[INFO] 10.244.2.2:46521 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000063101s
	[INFO] 10.244.1.2:54966 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000121002s
	[INFO] 10.244.1.2:51310 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155402s
	[INFO] 10.244.0.4:50059 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000461406s
	[INFO] 10.244.0.4:46661 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000153402s
	[INFO] 10.244.2.2:60745 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000245303s
	[INFO] 10.244.2.2:34262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101102s
	[INFO] 10.244.2.2:41051 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000061701s
	[INFO] 10.244.1.2:54731 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104401s
	[INFO] 10.244.1.2:45398 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000065401s
	[INFO] 10.244.1.2:33483 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076401s
	[INFO] 10.244.0.4:58311 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000210603s
	[INFO] 10.244.2.2:49862 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121602s
	[INFO] 10.244.2.2:59396 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000112702s
	[INFO] 10.244.1.2:35847 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154302s
	[INFO] 10.244.1.2:43744 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000288804s
	
	
	==> describe nodes <==
	Name:               ha-062500
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-062500
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-062500
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T04_31_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:31:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-062500
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:58:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:56:31 +0000   Fri, 19 Jul 2024 04:31:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:56:31 +0000   Fri, 19 Jul 2024 04:31:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:56:31 +0000   Fri, 19 Jul 2024 04:31:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:56:31 +0000   Fri, 19 Jul 2024 04:32:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.168.223
	  Hostname:    ha-062500
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1957180cbd7409b81ce8f16833129c1
	  System UUID:                0a9deb14-3d0a-ab4a-9249-9dea7abfc63c
	  Boot ID:                    aa43ec6b-25e1-4a68-98a1-a8571e0c507b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-drzm5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-7db6d8ff4d-jb6nt             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 coredns-7db6d8ff4d-jpmb4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-ha-062500                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-sk9jr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-ha-062500             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-062500    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-wv8bn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-ha-062500             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-062500                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26m   kube-proxy       
	  Normal  Starting                 27m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m   kubelet          Node ha-062500 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m   kubelet          Node ha-062500 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m   kubelet          Node ha-062500 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26m   node-controller  Node ha-062500 event: Registered Node ha-062500 in Controller
	  Normal  NodeReady                26m   kubelet          Node ha-062500 status is now: NodeReady
	  Normal  RegisteredNode           22m   node-controller  Node ha-062500 event: Registered Node ha-062500 in Controller
	  Normal  RegisteredNode           18m   node-controller  Node ha-062500 event: Registered Node ha-062500 in Controller
	
	
	Name:               ha-062500-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-062500-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-062500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_35_44_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:35:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-062500-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:58:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:56:33 +0000   Fri, 19 Jul 2024 04:35:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:56:33 +0000   Fri, 19 Jul 2024 04:35:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:56:33 +0000   Fri, 19 Jul 2024 04:35:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:56:33 +0000   Fri, 19 Jul 2024 04:36:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.171.55
	  Hostname:    ha-062500-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bd346714ac04d668146c1d818a78db6
	  System UUID:                9ad08af1-7bc2-f947-8ee2-852efca451b0
	  Boot ID:                    6200c2db-11c3-42fc-af25-30a73ef010cb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nkb7m                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-062500-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-xw86l                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-062500-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-062500-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-rtdgs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-062500-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-062500-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-062500-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-062500-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-062500-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-062500-m02 event: Registered Node ha-062500-m02 in Controller
	  Normal  RegisteredNode           22m                node-controller  Node ha-062500-m02 event: Registered Node ha-062500-m02 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-062500-m02 event: Registered Node ha-062500-m02 in Controller
	
	
	Name:               ha-062500-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-062500-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-062500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_39_50_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:39:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-062500-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:58:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:56:32 +0000   Fri, 19 Jul 2024 04:39:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:56:32 +0000   Fri, 19 Jul 2024 04:39:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:56:32 +0000   Fri, 19 Jul 2024 04:39:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:56:32 +0000   Fri, 19 Jul 2024 04:40:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.161.140
	  Hostname:    ha-062500-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbe6ed026070475789efdbc989b15461
	  System UUID:                8b9415f4-8533-c247-b849-f016d659a93f
	  Boot ID:                    cdda7d84-08be-4b13-b094-0338e83dcd8c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-njwwk                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 etcd-ha-062500-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-g9b42                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-062500-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-062500-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-g7z8c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-062500-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-062500-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node ha-062500-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node ha-062500-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node ha-062500-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-062500-m03 event: Registered Node ha-062500-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-062500-m03 event: Registered Node ha-062500-m03 in Controller
	  Normal  RegisteredNode           18m                node-controller  Node ha-062500-m03 event: Registered Node ha-062500-m03 in Controller
	  Normal  NodeReady                18m                kubelet          Node ha-062500-m03 status is now: NodeReady
	
	
	Name:               ha-062500-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-062500-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=ha-062500
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T04_45_20_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:45:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-062500-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:58:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:56:02 +0000   Fri, 19 Jul 2024 04:45:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:56:02 +0000   Fri, 19 Jul 2024 04:45:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:56:02 +0000   Fri, 19 Jul 2024 04:45:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:56:02 +0000   Fri, 19 Jul 2024 04:45:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.171.222
	  Hostname:    ha-062500-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 65e101663298420a92d5f3d023f0f7ba
	  System UUID:                eea2d69c-37d4-1b4d-b65c-81eb181eef56
	  Boot ID:                    7ea7e894-12df-44e0-ae62-b8f430605e4d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pms48       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-6kqfz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  RegisteredNode           13m                node-controller  Node ha-062500-m04 event: Registered Node ha-062500-m04 in Controller
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)  kubelet          Node ha-062500-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)  kubelet          Node ha-062500-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)  kubelet          Node ha-062500-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node ha-062500-m04 event: Registered Node ha-062500-m04 in Controller
	  Normal  RegisteredNode           13m                node-controller  Node ha-062500-m04 event: Registered Node ha-062500-m04 in Controller
	  Normal  NodeReady                12m                kubelet          Node ha-062500-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +6.853333] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul19 04:30] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.174564] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[Jul19 04:31] systemd-fstab-generator[1007]: Ignoring "noauto" option for root device
	[  +0.144036] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.566311] systemd-fstab-generator[1048]: Ignoring "noauto" option for root device
	[  +0.200864] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +0.239060] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +2.870819] systemd-fstab-generator[1286]: Ignoring "noauto" option for root device
	[  +0.216598] systemd-fstab-generator[1298]: Ignoring "noauto" option for root device
	[  +0.209926] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.273726] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[ +11.145760] systemd-fstab-generator[1425]: Ignoring "noauto" option for root device
	[  +0.113590] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.861233] systemd-fstab-generator[1674]: Ignoring "noauto" option for root device
	[  +7.010433] systemd-fstab-generator[1873]: Ignoring "noauto" option for root device
	[  +0.107333] kauditd_printk_skb: 70 callbacks suppressed
	[  +5.623229] kauditd_printk_skb: 67 callbacks suppressed
	[  +4.908073] systemd-fstab-generator[2368]: Ignoring "noauto" option for root device
	[ +13.977150] kauditd_printk_skb: 12 callbacks suppressed
	[Jul19 04:32] kauditd_printk_skb: 29 callbacks suppressed
	[Jul19 04:35] kauditd_printk_skb: 26 callbacks suppressed
	[Jul19 04:39] hrtimer: interrupt took 1159110 ns
	
	
	==> etcd [79a4c71c9c9a] <==
	{"level":"info","ts":"2024-07-19T04:45:30.207489Z","caller":"traceutil/trace.go:171","msg":"trace[1115887213] range","detail":"{range_begin:/registry/minions/ha-062500-m04; range_end:; response_count:1; response_revision:2590; }","duration":"163.53678ms","start":"2024-07-19T04:45:30.043938Z","end":"2024-07-19T04:45:30.207475Z","steps":["trace[1115887213] 'range keys from in-memory index tree'  (duration: 162.178065ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:45:30.207975Z","caller":"traceutil/trace.go:171","msg":"trace[765572233] transaction","detail":"{read_only:false; response_revision:2591; number_of_response:1; }","duration":"145.568474ms","start":"2024-07-19T04:45:30.062396Z","end":"2024-07-19T04:45:30.207964Z","steps":["trace[765572233] 'process raft request'  (duration: 105.765716ms)","trace[765572233] 'compare'  (duration: 39.734457ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:45:30.832935Z","caller":"traceutil/trace.go:171","msg":"trace[1432151400] transaction","detail":"{read_only:false; response_revision:2593; number_of_response:1; }","duration":"133.99804ms","start":"2024-07-19T04:45:30.698917Z","end":"2024-07-19T04:45:30.832915Z","steps":["trace[1432151400] 'process raft request'  (duration: 120.876589ms)","trace[1432151400] 'compare'  (duration: 13.04925ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:45:31.813745Z","caller":"traceutil/trace.go:171","msg":"trace[971411315] transaction","detail":"{read_only:false; response_revision:2599; number_of_response:1; }","duration":"163.163475ms","start":"2024-07-19T04:45:31.65056Z","end":"2024-07-19T04:45:31.813724Z","steps":["trace[971411315] 'process raft request'  (duration: 69.551399ms)","trace[971411315] 'compare'  (duration: 93.261672ms)"],"step_count":2}
	{"level":"info","ts":"2024-07-19T04:45:31.815205Z","caller":"traceutil/trace.go:171","msg":"trace[529721793] linearizableReadLoop","detail":"{readStateIndex:3090; appliedIndex:3091; }","duration":"118.379961ms","start":"2024-07-19T04:45:31.696812Z","end":"2024-07-19T04:45:31.815192Z","steps":["trace[529721793] 'read index received'  (duration: 118.376861ms)","trace[529721793] 'applied index is now lower than readState.Index'  (duration: 2.5µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T04:45:31.818468Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.619298ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets/\" range_end:\"/registry/poddisruptionbudgets0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T04:45:31.818782Z","caller":"traceutil/trace.go:171","msg":"trace[762990484] range","detail":"{range_begin:/registry/poddisruptionbudgets/; range_end:/registry/poddisruptionbudgets0; response_count:0; response_revision:2599; }","duration":"121.987502ms","start":"2024-07-19T04:45:31.696783Z","end":"2024-07-19T04:45:31.818771Z","steps":["trace[762990484] 'agreement among raft nodes before linearized reading'  (duration: 121.621898ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:45:35.312964Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"271.871821ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-062500-m04\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-07-19T04:45:35.313065Z","caller":"traceutil/trace.go:171","msg":"trace[247922682] range","detail":"{range_begin:/registry/minions/ha-062500-m04; range_end:; response_count:1; response_revision:2607; }","duration":"272.034323ms","start":"2024-07-19T04:45:35.041018Z","end":"2024-07-19T04:45:35.313052Z","steps":["trace[247922682] 'range keys from in-memory index tree'  (duration: 270.667107ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:45:35.313793Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.31179ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T04:45:35.313858Z","caller":"traceutil/trace.go:171","msg":"trace[390839341] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2607; }","duration":"173.407091ms","start":"2024-07-19T04:45:35.140443Z","end":"2024-07-19T04:45:35.313851Z","steps":["trace[390839341] 'range keys from in-memory index tree'  (duration: 172.061275ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:45:36.408207Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"370.851956ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-062500-m04\" ","response":"range_response_count:1 size:3114"}
	{"level":"info","ts":"2024-07-19T04:45:36.408287Z","caller":"traceutil/trace.go:171","msg":"trace[350030992] range","detail":"{range_begin:/registry/minions/ha-062500-m04; range_end:; response_count:1; response_revision:2609; }","duration":"371.004358ms","start":"2024-07-19T04:45:36.037269Z","end":"2024-07-19T04:45:36.408274Z","steps":["trace[350030992] 'range keys from in-memory index tree'  (duration: 369.485341ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T04:45:36.408778Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T04:45:36.037254Z","time spent":"371.509464ms","remote":"127.0.0.1:49592","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":3137,"request content":"key:\"/registry/minions/ha-062500-m04\" "}
	{"level":"warn","ts":"2024-07-19T04:45:36.409464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"267.037165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-07-19T04:45:36.409562Z","caller":"traceutil/trace.go:171","msg":"trace[1716466962] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2609; }","duration":"267.158566ms","start":"2024-07-19T04:45:36.142395Z","end":"2024-07-19T04:45:36.409554Z","steps":["trace[1716466962] 'range keys from in-memory index tree'  (duration: 265.169443ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T04:46:33.372123Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1935}
	{"level":"info","ts":"2024-07-19T04:46:33.424183Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1935,"took":"51.32318ms","hash":3470963645,"current-db-size-bytes":3526656,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2318336,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-19T04:46:33.424383Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3470963645,"revision":1935,"compact-revision":1033}
	{"level":"info","ts":"2024-07-19T04:51:33.411269Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2765}
	{"level":"info","ts":"2024-07-19T04:51:33.458402Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":2765,"took":"46.378595ms","hash":4236002765,"current-db-size-bytes":3526656,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":2273280,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-07-19T04:51:33.458523Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4236002765,"revision":2765,"compact-revision":1935}
	{"level":"info","ts":"2024-07-19T04:56:33.455476Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":3506}
	{"level":"info","ts":"2024-07-19T04:56:33.516881Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":3506,"took":"60.348422ms","hash":3553600382,"current-db-size-bytes":3526656,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1974272,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-07-19T04:56:33.516958Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3553600382,"revision":3506,"compact-revision":2765}
	
	
	==> kernel <==
	 04:58:50 up 29 min,  0 users,  load average: 0.84, 0.45, 0.34
	Linux ha-062500 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1ecc3bacfa9d] <==
	I0719 04:58:12.680897       1 main.go:326] Node ha-062500-m04 has CIDR [10.244.3.0/24] 
	I0719 04:58:22.673102       1 main.go:299] Handling node with IPs: map[172.28.171.55:{}]
	I0719 04:58:22.673680       1 main.go:326] Node ha-062500-m02 has CIDR [10.244.1.0/24] 
	I0719 04:58:22.673874       1 main.go:299] Handling node with IPs: map[172.28.161.140:{}]
	I0719 04:58:22.673941       1 main.go:326] Node ha-062500-m03 has CIDR [10.244.2.0/24] 
	I0719 04:58:22.674018       1 main.go:299] Handling node with IPs: map[172.28.171.222:{}]
	I0719 04:58:22.674062       1 main.go:326] Node ha-062500-m04 has CIDR [10.244.3.0/24] 
	I0719 04:58:22.674386       1 main.go:299] Handling node with IPs: map[172.28.168.223:{}]
	I0719 04:58:22.674423       1 main.go:303] handling current node
	I0719 04:58:32.682509       1 main.go:299] Handling node with IPs: map[172.28.171.55:{}]
	I0719 04:58:32.682922       1 main.go:326] Node ha-062500-m02 has CIDR [10.244.1.0/24] 
	I0719 04:58:32.683536       1 main.go:299] Handling node with IPs: map[172.28.161.140:{}]
	I0719 04:58:32.683698       1 main.go:326] Node ha-062500-m03 has CIDR [10.244.2.0/24] 
	I0719 04:58:32.683909       1 main.go:299] Handling node with IPs: map[172.28.171.222:{}]
	I0719 04:58:32.684025       1 main.go:326] Node ha-062500-m04 has CIDR [10.244.3.0/24] 
	I0719 04:58:32.684426       1 main.go:299] Handling node with IPs: map[172.28.168.223:{}]
	I0719 04:58:32.684513       1 main.go:303] handling current node
	I0719 04:58:42.673162       1 main.go:299] Handling node with IPs: map[172.28.168.223:{}]
	I0719 04:58:42.673274       1 main.go:303] handling current node
	I0719 04:58:42.673337       1 main.go:299] Handling node with IPs: map[172.28.171.55:{}]
	I0719 04:58:42.673576       1 main.go:326] Node ha-062500-m02 has CIDR [10.244.1.0/24] 
	I0719 04:58:42.674055       1 main.go:299] Handling node with IPs: map[172.28.161.140:{}]
	I0719 04:58:42.674142       1 main.go:326] Node ha-062500-m03 has CIDR [10.244.2.0/24] 
	I0719 04:58:42.674462       1 main.go:299] Handling node with IPs: map[172.28.171.222:{}]
	I0719 04:58:42.674558       1 main.go:326] Node ha-062500-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [0e6e869de2f3] <==
	I0719 04:31:39.315328       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 04:31:40.848019       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 04:31:40.895909       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 04:31:40.916525       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 04:31:53.551949       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 04:31:53.751821       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0719 04:39:42.777545       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0719 04:39:42.777755       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0719 04:39:42.777909       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.3µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0719 04:39:42.779279       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0719 04:39:42.779525       1 timeout.go:142] post-timeout activity - time-elapsed: 2.133619ms, POST "/api/v1/namespaces/kube-system/pods" result: <nil>
	E0719 04:41:00.301699       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59767: use of closed network connection
	E0719 04:41:01.944408       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59769: use of closed network connection
	E0719 04:41:02.486097       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59771: use of closed network connection
	E0719 04:41:03.105744       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59773: use of closed network connection
	E0719 04:41:03.687783       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59775: use of closed network connection
	E0719 04:41:04.216101       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59777: use of closed network connection
	E0719 04:41:04.746218       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59779: use of closed network connection
	E0719 04:41:05.289713       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59781: use of closed network connection
	E0719 04:41:05.837073       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59783: use of closed network connection
	E0719 04:41:06.784764       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59786: use of closed network connection
	E0719 04:41:17.298049       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59788: use of closed network connection
	E0719 04:41:17.836088       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59791: use of closed network connection
	E0719 04:41:28.844026       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59797: use of closed network connection
	E0719 04:41:39.380033       1 conn.go:339] Error on socket receive: read tcp 172.28.175.254:8443->172.28.160.1:59799: use of closed network connection
	
	
	==> kube-controller-manager [3db2de00e241] <==
	I0719 04:39:41.994693       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-062500-m03" podCIDRs=["10.244.2.0/24"]
	I0719 04:39:43.112190       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-062500-m03"
	I0719 04:40:54.217837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="177.12062ms"
	I0719 04:40:54.300933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.980514ms"
	I0719 04:40:54.301042       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.301µs"
	I0719 04:40:54.341695       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.701µs"
	I0719 04:40:54.344256       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.6µs"
	I0719 04:40:54.344901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.501µs"
	I0719 04:40:54.635603       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="257.597191ms"
	I0719 04:40:54.909957       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="274.205974ms"
	I0719 04:40:54.986548       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.132998ms"
	I0719 04:40:54.986879       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="160.203µs"
	I0719 04:40:55.139086       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="42.503523ms"
	I0719 04:40:55.139641       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="51.601µs"
	I0719 04:40:57.386259       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.050841ms"
	I0719 04:40:57.387217       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="198.502µs"
	I0719 04:40:57.603333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.839989ms"
	I0719 04:40:57.603480       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.601µs"
	I0719 04:40:57.744462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.834104ms"
	I0719 04:40:57.744554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.1µs"
	E0719 04:45:19.391720       1 certificate_controller.go:146] Sync csr-5jgwx failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-5jgwx": the object has been modified; please apply your changes to the latest version and try again
	I0719 04:45:19.476864       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-062500-m04\" does not exist"
	I0719 04:45:19.540330       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-062500-m04" podCIDRs=["10.244.3.0/24"]
	I0719 04:45:23.246767       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-062500-m04"
	I0719 04:45:52.846687       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-062500-m04"
	
	
	==> kube-proxy [a00b20346964] <==
	I0719 04:31:55.572666       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:31:55.585939       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.168.223"]
	I0719 04:31:55.642049       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:31:55.642178       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:31:55.642199       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:31:55.647230       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:31:55.647842       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:31:55.648371       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:31:55.649774       1 config.go:192] "Starting service config controller"
	I0719 04:31:55.649854       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:31:55.649964       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:31:55.650052       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:31:55.650805       1 config.go:319] "Starting node config controller"
	I0719 04:31:55.650897       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:31:55.750471       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:31:55.750691       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:31:55.751483       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [6f24d8e2a5f0] <==
	E0719 04:31:37.596452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 04:31:37.754017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 04:31:37.755177       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 04:31:37.782066       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 04:31:37.782250       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 04:31:37.798014       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 04:31:37.798060       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 04:31:37.863607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 04:31:37.863904       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:37.936229       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 04:31:37.936464       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 04:31:37.961408       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 04:31:37.961834       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0719 04:31:39.872832       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 04:40:54.167747       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-njwwk\": pod busybox-fc5497c4f-njwwk is already assigned to node \"ha-062500-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-njwwk" node="ha-062500-m02"
	E0719 04:40:54.167868       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-njwwk\": pod busybox-fc5497c4f-njwwk is already assigned to node \"ha-062500-m03\"" pod="default/busybox-fc5497c4f-njwwk"
	E0719 04:40:54.193078       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nkb7m\": pod busybox-fc5497c4f-nkb7m is already assigned to node \"ha-062500-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-nkb7m" node="ha-062500-m03"
	E0719 04:40:54.193567       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-nkb7m\": pod busybox-fc5497c4f-nkb7m is already assigned to node \"ha-062500-m02\"" pod="default/busybox-fc5497c4f-nkb7m"
	I0719 04:40:54.201222       1 cache.go:503] "Pod was added to a different node than it was assumed" podKey="34690d25-0055-4780-9509-46acd99240e2" pod="default/busybox-fc5497c4f-drzm5" assumedNode="ha-062500" currentNode="ha-062500-m02"
	E0719 04:40:54.241366       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-drzm5\": pod busybox-fc5497c4f-drzm5 is already assigned to node \"ha-062500\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-drzm5" node="ha-062500-m02"
	E0719 04:40:54.241442       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 34690d25-0055-4780-9509-46acd99240e2(default/busybox-fc5497c4f-drzm5) was assumed on ha-062500-m02 but assigned to ha-062500" pod="default/busybox-fc5497c4f-drzm5"
	E0719 04:40:54.241466       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-drzm5\": pod busybox-fc5497c4f-drzm5 is already assigned to node \"ha-062500\"" pod="default/busybox-fc5497c4f-drzm5"
	I0719 04:40:54.241487       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-drzm5" node="ha-062500"
	E0719 04:45:19.985738       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-rjslj\": pod kindnet-rjslj is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-rjslj" node="ha-062500-m04"
	E0719 04:45:19.986605       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-rjslj\": pod kindnet-rjslj is being deleted, cannot be assigned to a host" pod="kube-system/kindnet-rjslj"
	
	
	==> kubelet <==
	Jul 19 04:54:40 ha-062500 kubelet[2375]: E0719 04:54:40.947644    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:54:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:54:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:54:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:54:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:55:40 ha-062500 kubelet[2375]: E0719 04:55:40.948551    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:55:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:55:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:55:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:55:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:56:40 ha-062500 kubelet[2375]: E0719 04:56:40.966734    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:56:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:56:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:56:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:56:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:57:40 ha-062500 kubelet[2375]: E0719 04:57:40.945901    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:57:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:57:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:57:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:57:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:58:40 ha-062500 kubelet[2375]: E0719 04:58:40.950334    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:58:40 ha-062500 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:58:40 ha-062500 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:58:40 ha-062500 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:58:40 ha-062500 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 04:58:42.013337   11776 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-062500 -n ha-062500
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-062500 -n ha-062500: (12.7469291s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-062500 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (48.78s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (56.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-22cdf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-22cdf -- sh -c "ping -c 1 172.28.160.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-22cdf -- sh -c "ping -c 1 172.28.160.1": exit status 1 (10.5500679s)

                                                
                                                
-- stdout --
	PING 172.28.160.1 (172.28.160.1): 56 data bytes
	
	--- 172.28.160.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 05:37:31.305518    3872 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.28.160.1) from pod (busybox-fc5497c4f-22cdf): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-n4tql -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-n4tql -- sh -c "ping -c 1 172.28.160.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-n4tql -- sh -c "ping -c 1 172.28.160.1": exit status 1 (10.4842355s)

-- stdout --
	PING 172.28.160.1 (172.28.160.1): 56 data bytes
	
	--- 172.28.160.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0719 05:37:42.356173    6284 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.28.160.1) from pod (busybox-fc5497c4f-n4tql): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-761300 -n multinode-761300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-761300 -n multinode-761300: (11.8536756s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 logs -n 25: (8.5425174s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-009400 ssh -- ls                    | mount-start-2-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:25 UTC | 19 Jul 24 05:26 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-009400                           | mount-start-1-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:26 UTC | 19 Jul 24 05:26 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-009400 ssh -- ls                    | mount-start-2-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:26 UTC | 19 Jul 24 05:26 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-009400                           | mount-start-2-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:26 UTC | 19 Jul 24 05:27 UTC |
	| start   | -p mount-start-2-009400                           | mount-start-2-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:27 UTC | 19 Jul 24 05:29 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:29 UTC |                     |
	|         | --profile mount-start-2-009400 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-009400 ssh -- ls                    | mount-start-2-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:29 UTC | 19 Jul 24 05:29 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-009400                           | mount-start-2-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:29 UTC | 19 Jul 24 05:29 UTC |
	| delete  | -p mount-start-1-009400                           | mount-start-1-009400 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:29 UTC | 19 Jul 24 05:29 UTC |
	| start   | -p multinode-761300                               | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:29 UTC | 19 Jul 24 05:36 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- apply -f                   | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- rollout                    | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- get pods -o                | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- get pods -o                | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | busybox-fc5497c4f-22cdf --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | busybox-fc5497c4f-n4tql --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | busybox-fc5497c4f-22cdf --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | busybox-fc5497c4f-n4tql --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | busybox-fc5497c4f-22cdf -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | busybox-fc5497c4f-n4tql -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- get pods -o                | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | busybox-fc5497c4f-22cdf                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC |                     |
	|         | busybox-fc5497c4f-22cdf -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.160.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC | 19 Jul 24 05:37 UTC |
	|         | busybox-fc5497c4f-n4tql                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-761300 -- exec                       | multinode-761300     | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:37 UTC |                     |
	|         | busybox-fc5497c4f-n4tql -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.28.160.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 05:29:52
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 05:29:52.466914    2708 out.go:291] Setting OutFile to fd 700 ...
	I0719 05:29:52.467626    2708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:29:52.467626    2708 out.go:304] Setting ErrFile to fd 892...
	I0719 05:29:52.467626    2708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:29:52.491436    2708 out.go:298] Setting JSON to false
	I0719 05:29:52.494479    2708 start.go:129] hostinfo: {"hostname":"minikube6","uptime":26018,"bootTime":1721340973,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 05:29:52.494962    2708 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 05:29:52.501179    2708 out.go:177] * [multinode-761300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 05:29:52.505384    2708 notify.go:220] Checking for updates...
	I0719 05:29:52.508148    2708 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:29:52.511360    2708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:29:52.514133    2708 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 05:29:52.520077    2708 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:29:52.524981    2708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:29:52.528987    2708 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:29:52.529488    2708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:29:57.944700    2708 out.go:177] * Using the hyperv driver based on user configuration
	I0719 05:29:57.949234    2708 start.go:297] selected driver: hyperv
	I0719 05:29:57.949234    2708 start.go:901] validating driver "hyperv" against <nil>
	I0719 05:29:57.949234    2708 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:29:57.995988    2708 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 05:29:57.997424    2708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:29:57.997548    2708 cni.go:84] Creating CNI manager for ""
	I0719 05:29:57.997548    2708 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0719 05:29:57.997548    2708 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 05:29:57.997743    2708 start.go:340] cluster config:
	{Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:29:57.998042    2708 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:29:58.002548    2708 out.go:177] * Starting "multinode-761300" primary control-plane node in "multinode-761300" cluster
	I0719 05:29:58.004700    2708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:29:58.004700    2708 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 05:29:58.004700    2708 cache.go:56] Caching tarball of preloaded images
	I0719 05:29:58.005450    2708 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:29:58.005540    2708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 05:29:58.005540    2708 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:29:58.005540    2708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json: {Name:mk2277894c40bb398ee5281e6c68d36397492647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:29:58.007022    2708 start.go:360] acquireMachinesLock for multinode-761300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:29:58.007022    2708 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-761300"
	I0719 05:29:58.007022    2708 start.go:93] Provisioning new machine with config: &{Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 05:29:58.007767    2708 start.go:125] createHost starting for "" (driver="hyperv")
	I0719 05:29:58.011428    2708 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 05:29:58.011912    2708 start.go:159] libmachine.API.Create for "multinode-761300" (driver="hyperv")
	I0719 05:29:58.011944    2708 client.go:168] LocalClient.Create starting
	I0719 05:29:58.012295    2708 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 05:29:58.012295    2708 main.go:141] libmachine: Decoding PEM data...
	I0719 05:29:58.012916    2708 main.go:141] libmachine: Parsing certificate...
	I0719 05:29:58.013195    2708 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 05:29:58.013577    2708 main.go:141] libmachine: Decoding PEM data...
	I0719 05:29:58.013667    2708 main.go:141] libmachine: Parsing certificate...
	I0719 05:29:58.013816    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 05:30:00.119173    2708 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 05:30:00.119649    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:00.119826    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 05:30:01.870772    2708 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 05:30:01.871753    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:01.871753    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 05:30:03.422587    2708 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 05:30:03.422587    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:03.423229    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 05:30:07.232853    2708 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 05:30:07.232853    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:07.235319    2708 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 05:30:07.776569    2708 main.go:141] libmachine: Creating SSH key...
	I0719 05:30:08.310174    2708 main.go:141] libmachine: Creating VM...
	I0719 05:30:08.310174    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 05:30:11.177302    2708 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 05:30:11.177751    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:11.177832    2708 main.go:141] libmachine: Using switch "Default Switch"
	I0719 05:30:11.177832    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 05:30:12.946561    2708 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 05:30:12.946561    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:12.947483    2708 main.go:141] libmachine: Creating VHD
	I0719 05:30:12.947483    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 05:30:16.754447    2708 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : AEF73043-FFD5-4915-8AE4-A1E72A4B5A95
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 05:30:16.754752    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:16.754752    2708 main.go:141] libmachine: Writing magic tar header
	I0719 05:30:16.754926    2708 main.go:141] libmachine: Writing SSH key tar header
	I0719 05:30:16.764796    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 05:30:19.983262    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:19.983262    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:19.983959    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\disk.vhd' -SizeBytes 20000MB
	I0719 05:30:22.531498    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:22.531498    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:22.532000    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-761300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0719 05:30:26.209557    2708 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-761300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 05:30:26.210227    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:26.210227    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-761300 -DynamicMemoryEnabled $false
	I0719 05:30:28.508333    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:28.508333    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:28.509343    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-761300 -Count 2
	I0719 05:30:30.770835    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:30.770835    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:30.771364    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-761300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\boot2docker.iso'
	I0719 05:30:33.399331    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:33.399962    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:33.399962    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-761300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\disk.vhd'
	I0719 05:30:36.123703    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:36.123703    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:36.124668    2708 main.go:141] libmachine: Starting VM...
	I0719 05:30:36.124668    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-761300
	I0719 05:30:39.265938    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:39.265938    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:39.265938    2708 main.go:141] libmachine: Waiting for host to start...
	I0719 05:30:39.265938    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:30:41.589388    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:30:41.589388    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:41.590000    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:30:44.177879    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:44.178584    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:45.191648    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:30:47.438961    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:30:47.439321    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:47.439321    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:30:49.990893    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:49.990893    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:51.005534    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:30:53.257815    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:30:53.257815    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:53.258904    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:30:55.792921    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:30:55.792921    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:56.809102    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:30:59.060491    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:30:59.060491    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:30:59.060801    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:01.580488    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:31:01.580488    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:02.587731    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:04.899667    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:04.899667    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:04.900227    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:07.521994    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:07.521994    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:07.521994    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:09.736222    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:09.736311    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:09.736311    2708 machine.go:94] provisionDockerMachine start ...
	I0719 05:31:09.736395    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:11.945069    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:11.945710    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:11.945763    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:14.559241    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:14.559241    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:14.565323    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:31:14.577275    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.16 22 <nil> <nil>}
	I0719 05:31:14.577275    2708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:31:14.707515    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 05:31:14.707633    2708 buildroot.go:166] provisioning hostname "multinode-761300"
	I0719 05:31:14.707741    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:16.879551    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:16.879551    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:16.879551    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:19.471735    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:19.471735    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:19.478024    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:31:19.478579    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.16 22 <nil> <nil>}
	I0719 05:31:19.478579    2708 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-761300 && echo "multinode-761300" | sudo tee /etc/hostname
	I0719 05:31:19.642273    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-761300
	
	I0719 05:31:19.642389    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:21.816289    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:21.816289    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:21.816289    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:24.404839    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:24.404839    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:24.413370    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:31:24.413940    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.16 22 <nil> <nil>}
	I0719 05:31:24.413940    2708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-761300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-761300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-761300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:31:24.561967    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:31:24.562131    2708 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 05:31:24.562131    2708 buildroot.go:174] setting up certificates
	I0719 05:31:24.562131    2708 provision.go:84] configureAuth start
	I0719 05:31:24.562269    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:26.841817    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:26.842849    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:26.842917    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:29.548191    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:29.549040    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:29.549190    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:31.771807    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:31.771807    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:31.771919    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:34.494727    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:34.494778    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:34.494778    2708 provision.go:143] copyHostCerts
	I0719 05:31:34.494778    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 05:31:34.495323    2708 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:31:34.495524    2708 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 05:31:34.495581    2708 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 05:31:34.496883    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 05:31:34.497576    2708 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:31:34.497576    2708 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 05:31:34.497972    2708 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:31:34.499122    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 05:31:34.499439    2708 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:31:34.499439    2708 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 05:31:34.499439    2708 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:31:34.501137    2708 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-761300 san=[127.0.0.1 172.28.162.16 localhost minikube multinode-761300]
	I0719 05:31:34.658223    2708 provision.go:177] copyRemoteCerts
	I0719 05:31:34.670284    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:31:34.670284    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:36.830317    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:36.830317    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:36.831254    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:39.451527    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:39.451527    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:39.452238    2708 sshutil.go:53] new ssh client: &{IP:172.28.162.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:31:39.548400    2708 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8780588s)
	I0719 05:31:39.548543    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 05:31:39.548946    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:31:39.593532    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 05:31:39.594580    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 05:31:39.638229    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 05:31:39.638229    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 05:31:39.683456    2708 provision.go:87] duration metric: took 15.1211467s to configureAuth
	I0719 05:31:39.683530    2708 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:31:39.684231    2708 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:31:39.684284    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:41.841714    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:41.842193    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:41.842193    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:44.426759    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:44.426759    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:44.432204    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:31:44.432375    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.16 22 <nil> <nil>}
	I0719 05:31:44.432375    2708 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:31:44.565130    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 05:31:44.565284    2708 buildroot.go:70] root file system type: tmpfs
	I0719 05:31:44.565495    2708 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 05:31:44.565689    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:46.728795    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:46.728795    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:46.728795    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:49.361831    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:49.361831    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:49.367559    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:31:49.368157    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.16 22 <nil> <nil>}
	I0719 05:31:49.368157    2708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:31:49.517058    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:31:49.517250    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:51.720465    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:51.720465    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:51.720614    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:31:54.308424    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:31:54.308424    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:54.313821    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:31:54.313985    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.16 22 <nil> <nil>}
	I0719 05:31:54.313985    2708 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:31:56.564462    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 05:31:56.565065    2708 machine.go:97] duration metric: took 46.8275992s to provisionDockerMachine
	I0719 05:31:56.565065    2708 client.go:171] duration metric: took 1m58.5516726s to LocalClient.Create
	I0719 05:31:56.565118    2708 start.go:167] duration metric: took 1m58.5518501s to libmachine.API.Create "multinode-761300"
	I0719 05:31:56.565118    2708 start.go:293] postStartSetup for "multinode-761300" (driver="hyperv")
	I0719 05:31:56.565188    2708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:31:56.577865    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:31:56.577865    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:31:58.784643    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:31:58.784833    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:31:58.784833    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:32:01.436643    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:32:01.436643    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:01.436643    2708 sshutil.go:53] new ssh client: &{IP:172.28.162.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:32:01.550038    2708 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9716394s)
	I0719 05:32:01.562782    2708 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:32:01.572188    2708 command_runner.go:130] > NAME=Buildroot
	I0719 05:32:01.572188    2708 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 05:32:01.572287    2708 command_runner.go:130] > ID=buildroot
	I0719 05:32:01.572287    2708 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 05:32:01.572287    2708 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 05:32:01.572396    2708 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:32:01.572466    2708 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 05:32:01.572917    2708 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 05:32:01.573768    2708 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 05:32:01.573768    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 05:32:01.586850    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:32:01.606770    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 05:32:01.652002    2708 start.go:296] duration metric: took 5.0868244s for postStartSetup
	I0719 05:32:01.655712    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:32:03.822617    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:32:03.823096    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:03.823201    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:32:06.415965    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:32:06.416944    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:06.417086    2708 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:32:06.420172    2708 start.go:128] duration metric: took 2m8.4108902s to createHost
	I0719 05:32:06.420857    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:32:08.633616    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:32:08.633616    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:08.634376    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:32:11.226025    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:32:11.226025    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:11.232933    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:32:11.233886    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.16 22 <nil> <nil>}
	I0719 05:32:11.233886    2708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 05:32:11.360809    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721367131.378607273
	
	I0719 05:32:11.360809    2708 fix.go:216] guest clock: 1721367131.378607273
	I0719 05:32:11.360809    2708 fix.go:229] Guest: 2024-07-19 05:32:11.378607273 +0000 UTC Remote: 2024-07-19 05:32:06.4201728 +0000 UTC m=+134.117191201 (delta=4.958434473s)
	I0719 05:32:11.360809    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:32:13.541558    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:32:13.541558    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:13.541558    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:32:16.125723    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:32:16.126799    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:16.133403    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:32:16.134154    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.16 22 <nil> <nil>}
	I0719 05:32:16.134154    2708 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721367131
	I0719 05:32:16.274728    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 05:32:11 UTC 2024
	
	I0719 05:32:16.274728    2708 fix.go:236] clock set: Fri Jul 19 05:32:11 UTC 2024
	 (err=<nil>)
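[annotation] The clock-fix sequence above (fix.go:216/229/236) reads the guest clock over SSH, compares it against the host-side reference, and resets the guest with `sudo date -s @<epoch>` when they diverge. A minimal sketch of the delta computation, using the two epoch timestamps taken verbatim from the log lines above (2024-07-19 05:32:06.4201728 UTC is epoch 1721367126.4201728):

```shell
# Guest and host reference clocks from the log (seconds since the Unix epoch).
guest=1721367131.378607273
remote=1721367126.420172800
# Reproduce the logged skew; prints: delta=4.958s (log shows 4.958434473s).
awk -v g="$guest" -v r="$remote" 'BEGIN { printf "delta=%.3fs\n", g - r }'
# When the skew is too large, minikube resets the guest clock, as logged:
#   sudo date -s @1721367131
```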
	I0719 05:32:16.274728    2708 start.go:83] releasing machines lock for "multinode-761300", held for 2m18.2660743s
	I0719 05:32:16.275406    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:32:18.451040    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:32:18.451040    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:18.451832    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:32:21.008415    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:32:21.008415    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:21.013466    2708 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:32:21.013652    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:32:21.023248    2708 ssh_runner.go:195] Run: cat /version.json
	I0719 05:32:21.024271    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:32:23.283490    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:32:23.283490    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:23.283627    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:32:23.295082    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:32:23.295082    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:23.295082    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:32:26.038560    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:32:26.038746    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:26.038746    2708 sshutil.go:53] new ssh client: &{IP:172.28.162.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:32:26.057595    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:32:26.057595    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:32:26.057847    2708 sshutil.go:53] new ssh client: &{IP:172.28.162.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:32:26.126192    2708 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 05:32:26.126265    2708 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1127389s)
	W0719 05:32:26.126265    2708 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
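[annotation] The status-127 failure above occurs because the runner invokes the Windows binary name `curl.exe` over SSH inside the Linux guest, where no such command exists; bash exits 127 ("command not found"), which triggers the registry.k8s.io connectivity warning seen further down. A hedged sketch of the failure mode (the URL is from the log; whether plain `curl` is available in the guest is an assumption):

```shell
# Inside a Linux guest, the Windows-style binary name is not on PATH:
sh -c 'curl.exe -sS -m 2 https://registry.k8s.io/' 2>/dev/null
echo "exit=$?"   # 127: command not found, matching the logged status
# A portable probe would fall back to the plain name when it exists:
if command -v curl >/dev/null 2>&1; then
    curl -sS -m 2 https://registry.k8s.io/ >/dev/null && echo reachable
fi
```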
	I0719 05:32:26.143150    2708 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 05:32:26.143150    2708 ssh_runner.go:235] Completed: cat /version.json: (5.1198421s)
	I0719 05:32:26.156106    2708 ssh_runner.go:195] Run: systemctl --version
	I0719 05:32:26.164282    2708 command_runner.go:130] > systemd 252 (252)
	I0719 05:32:26.164282    2708 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 05:32:26.178156    2708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:32:26.186331    2708 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 05:32:26.187424    2708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:32:26.199730    2708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	W0719 05:32:26.223414    2708 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 05:32:26.223414    2708 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 05:32:26.241487    2708 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 05:32:26.241532    2708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:32:26.241532    2708 start.go:495] detecting cgroup driver to use...
	I0719 05:32:26.241882    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:32:26.277335    2708 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 05:32:26.290201    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 05:32:26.320139    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:32:26.337835    2708 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:32:26.349459    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:32:26.383178    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:32:26.413568    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:32:26.442826    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:32:26.473043    2708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:32:26.501579    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:32:26.532180    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 05:32:26.562038    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
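[annotation] The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place: pin the pause image, force `SystemdCgroup = false` (i.e. the cgroupfs driver), migrate `io.containerd.runtime.v1.linux` and `runc.v1` to `runc.v2`, point `conf_dir` at `/etc/cni/net.d`, and re-enable unprivileged ports. A minimal sketch of the cgroup-driver edit, applied to an illustrative fragment (the sample file path and contents are assumptions, the `sed` expression is the one from the log):

```shell
# Illustrative config.toml fragment with the systemd driver enabled.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# The substitution minikube runs: preserve indentation, flip the value.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
grep SystemdCgroup /tmp/config.toml
```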
	I0719 05:32:26.592971    2708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:32:26.611275    2708 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 05:32:26.623443    2708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:32:26.652017    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:32:26.845695    2708 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:32:26.874461    2708 start.go:495] detecting cgroup driver to use...
	I0719 05:32:26.888091    2708 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:32:26.913397    2708 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 05:32:26.913397    2708 command_runner.go:130] > [Unit]
	I0719 05:32:26.913397    2708 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 05:32:26.913397    2708 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 05:32:26.913397    2708 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 05:32:26.913397    2708 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 05:32:26.913397    2708 command_runner.go:130] > StartLimitBurst=3
	I0719 05:32:26.913397    2708 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 05:32:26.913397    2708 command_runner.go:130] > [Service]
	I0719 05:32:26.913397    2708 command_runner.go:130] > Type=notify
	I0719 05:32:26.913397    2708 command_runner.go:130] > Restart=on-failure
	I0719 05:32:26.913397    2708 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 05:32:26.913397    2708 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 05:32:26.913397    2708 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 05:32:26.913397    2708 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 05:32:26.913397    2708 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 05:32:26.913397    2708 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 05:32:26.913397    2708 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 05:32:26.913397    2708 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 05:32:26.913397    2708 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 05:32:26.913397    2708 command_runner.go:130] > ExecStart=
	I0719 05:32:26.913939    2708 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 05:32:26.913989    2708 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 05:32:26.913989    2708 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 05:32:26.914031    2708 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 05:32:26.914031    2708 command_runner.go:130] > LimitNOFILE=infinity
	I0719 05:32:26.914065    2708 command_runner.go:130] > LimitNPROC=infinity
	I0719 05:32:26.914065    2708 command_runner.go:130] > LimitCORE=infinity
	I0719 05:32:26.914065    2708 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 05:32:26.914108    2708 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 05:32:26.914108    2708 command_runner.go:130] > TasksMax=infinity
	I0719 05:32:26.914108    2708 command_runner.go:130] > TimeoutStartSec=0
	I0719 05:32:26.914148    2708 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 05:32:26.914148    2708 command_runner.go:130] > Delegate=yes
	I0719 05:32:26.914148    2708 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 05:32:26.914148    2708 command_runner.go:130] > KillMode=process
	I0719 05:32:26.914148    2708 command_runner.go:130] > [Install]
	I0719 05:32:26.914204    2708 command_runner.go:130] > WantedBy=multi-user.target
	I0719 05:32:26.926428    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:32:26.960178    2708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:32:27.002131    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:32:27.036387    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:32:27.076774    2708 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 05:32:27.136731    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:32:27.160256    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:32:27.198144    2708 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 05:32:27.210704    2708 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:32:27.217079    2708 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 05:32:27.231266    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:32:27.248283    2708 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 05:32:27.293703    2708 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:32:27.494348    2708 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:32:27.684412    2708 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:32:27.684412    2708 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:32:27.728584    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:32:27.922939    2708 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:32:30.504535    2708 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5815029s)
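[annotation] In the steps above, docker.go:574 writes a 130-byte `/etc/docker/daemon.json` to switch dockerd to the cgroupfs driver, then reloads and restarts the service. The log records only the file's size, so the contents below are an assumption based on Docker's documented `exec-opts` daemon option, not the actual bytes minikube wrote:

```shell
# Hypothetical daemon.json matching the "cgroupfs" driver the log reports;
# written to /tmp here instead of /etc/docker to stay side-effect free.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
grep -o 'native.cgroupdriver=cgroupfs' /tmp/daemon.json
# Applied with: sudo systemctl daemon-reload && sudo systemctl restart docker
```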
	I0719 05:32:30.517780    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 05:32:30.557001    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:32:30.589905    2708 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 05:32:30.778636    2708 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 05:32:30.968344    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:32:31.187990    2708 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 05:32:31.227818    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:32:31.261234    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:32:31.455350    2708 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 05:32:31.561399    2708 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 05:32:31.573693    2708 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 05:32:31.582356    2708 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 05:32:31.582424    2708 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 05:32:31.582424    2708 command_runner.go:130] > Device: 0,22	Inode: 882         Links: 1
	I0719 05:32:31.582424    2708 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 05:32:31.582547    2708 command_runner.go:130] > Access: 2024-07-19 05:32:31.498981813 +0000
	I0719 05:32:31.582547    2708 command_runner.go:130] > Modify: 2024-07-19 05:32:31.498981813 +0000
	I0719 05:32:31.582547    2708 command_runner.go:130] > Change: 2024-07-19 05:32:31.502981828 +0000
	I0719 05:32:31.582547    2708 command_runner.go:130] >  Birth: -
	I0719 05:32:31.582611    2708 start.go:563] Will wait 60s for crictl version
	I0719 05:32:31.593998    2708 ssh_runner.go:195] Run: which crictl
	I0719 05:32:31.599890    2708 command_runner.go:130] > /usr/bin/crictl
	I0719 05:32:31.610745    2708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:32:31.661983    2708 command_runner.go:130] > Version:  0.1.0
	I0719 05:32:31.662258    2708 command_runner.go:130] > RuntimeName:  docker
	I0719 05:32:31.662258    2708 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 05:32:31.662258    2708 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 05:32:31.662258    2708 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 05:32:31.671618    2708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:32:31.703211    2708 command_runner.go:130] > 27.0.3
	I0719 05:32:31.712268    2708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:32:31.743437    2708 command_runner.go:130] > 27.0.3
	I0719 05:32:31.758729    2708 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 05:32:31.759056    2708 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 05:32:31.763671    2708 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 05:32:31.763671    2708 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 05:32:31.764189    2708 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 05:32:31.764189    2708 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 05:32:31.766897    2708 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 05:32:31.766897    2708 ip.go:210] interface addr: 172.28.160.1/20
	I0719 05:32:31.778266    2708 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 05:32:31.784417    2708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:32:31.806372    2708 kubeadm.go:883] updating cluster {Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:32:31.806457    2708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:32:31.815204    2708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:32:31.836825    2708 docker.go:685] Got preloaded images: 
	I0719 05:32:31.836825    2708 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0719 05:32:31.849077    2708 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 05:32:31.867025    2708 command_runner.go:139] > {"Repositories":{}}
	I0719 05:32:31.878087    2708 ssh_runner.go:195] Run: which lz4
	I0719 05:32:31.883243    2708 command_runner.go:130] > /usr/bin/lz4
	I0719 05:32:31.883787    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0719 05:32:31.895482    2708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 05:32:31.901035    2708 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 05:32:31.901601    2708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 05:32:31.901804    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0719 05:32:33.752651    2708 docker.go:649] duration metric: took 1.8681963s to copy over tarball
	I0719 05:32:33.765089    2708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 05:32:42.331423    2708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.5662326s)
	I0719 05:32:42.331516    2708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 05:32:42.390889    2708 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 05:32:42.408900    2708 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0719 05:32:42.408900    2708 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0719 05:32:42.453770    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:32:42.658817    2708 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:32:45.996722    2708 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3377573s)
	I0719 05:32:46.005956    2708 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:32:46.031557    2708 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 05:32:46.031557    2708 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 05:32:46.031557    2708 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 05:32:46.031721    2708 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 05:32:46.031721    2708 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 05:32:46.031721    2708 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 05:32:46.031721    2708 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 05:32:46.031721    2708 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:32:46.031791    2708 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 05:32:46.031857    2708 cache_images.go:84] Images are preloaded, skipping loading
	I0719 05:32:46.031914    2708 kubeadm.go:934] updating node { 172.28.162.16 8443 v1.30.3 docker true true} ...
	I0719 05:32:46.032178    2708 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-761300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.162.16
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 05:32:46.041818    2708 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 05:32:46.078411    2708 command_runner.go:130] > cgroupfs
	I0719 05:32:46.079522    2708 cni.go:84] Creating CNI manager for ""
	I0719 05:32:46.079618    2708 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 05:32:46.079618    2708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 05:32:46.079673    2708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.162.16 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-761300 NodeName:multinode-761300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.162.16"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.162.16 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 05:32:46.079983    2708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.162.16
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-761300"
	  kubeletExtraArgs:
	    node-ip: 172.28.162.16
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.162.16"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
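	The config printed above is a multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---`. As a quick illustrative sanity check (not part of minikube; the file content below is a trimmed stand-in for the real config), the document count can be derived from the separators:

```shell
# Count the YAML documents in a kubeadm-style multi-doc config by its
# '---' separators; the full config above has four documents.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
# N separators delimit N+1 documents
docs=$(( $(grep -c '^---$' "$cfg") + 1 ))
echo "$docs"
```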
	
	I0719 05:32:46.091425    2708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 05:32:46.110525    2708 command_runner.go:130] > kubeadm
	I0719 05:32:46.110525    2708 command_runner.go:130] > kubectl
	I0719 05:32:46.110525    2708 command_runner.go:130] > kubelet
	I0719 05:32:46.110525    2708 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:32:46.121490    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:32:46.138503    2708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0719 05:32:46.168511    2708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 05:32:46.203520    2708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0719 05:32:46.245857    2708 ssh_runner.go:195] Run: grep 172.28.162.16	control-plane.minikube.internal$ /etc/hosts
	I0719 05:32:46.251869    2708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.162.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
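	The one-liner above pins `control-plane.minikube.internal` in `/etc/hosts` idempotently: strip any existing entry for the name, append the current one, and copy the result back. A minimal standalone sketch of the same pattern, run against a temp file rather than `/etc/hosts` (the `pin_host` helper and file paths are illustrative, and the match is simplified to an end-of-line anchor rather than the log's tab-anchored grep):

```shell
# Idempotently pin a hostname to an IP in a hosts-format file.
hostsfile=$(mktemp)
printf '127.0.0.1\tlocalhost\n1.2.3.4\tcontrol-plane.minikube.internal\n' > "$hostsfile"

pin_host() {
  ip=$1; name=$2; f=$3
  # Drop any line ending in the hostname, then append the fresh mapping.
  { grep -v "$name\$" "$f"; printf '%s\t%s\n' "$ip" "$name"; } > "$f.new"
  mv "$f.new" "$f"
}

pin_host 172.28.162.16 control-plane.minikube.internal "$hostsfile"
cat "$hostsfile"
```

	Running `pin_host` again with the same arguments leaves exactly one entry for the name, which is why minikube can run it unconditionally on every start.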
	I0719 05:32:46.292497    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:32:46.485227    2708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:32:46.516256    2708 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300 for IP: 172.28.162.16
	I0719 05:32:46.516256    2708 certs.go:194] generating shared ca certs ...
	I0719 05:32:46.516256    2708 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:32:46.517365    2708 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 05:32:46.517365    2708 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 05:32:46.518232    2708 certs.go:256] generating profile certs ...
	I0719 05:32:46.518232    2708 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\client.key
	I0719 05:32:46.518232    2708 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\client.crt with IP's: []
	I0719 05:32:47.229696    2708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\client.crt ...
	I0719 05:32:47.229696    2708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\client.crt: {Name:mk8a3d1b806174e4d9803025a509bd9d3d324dee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:32:47.231660    2708 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\client.key ...
	I0719 05:32:47.231660    2708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\client.key: {Name:mk6023bb88ad0c508f0f6e2c2b253c45e356af37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:32:47.232136    2708 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.6f213ef3
	I0719 05:32:47.232136    2708 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.6f213ef3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.162.16]
	I0719 05:32:47.527792    2708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.6f213ef3 ...
	I0719 05:32:47.527792    2708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.6f213ef3: {Name:mk50d25fa75a81e6ba2e48c4ef596fc0b353d42b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:32:47.529340    2708 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.6f213ef3 ...
	I0719 05:32:47.529340    2708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.6f213ef3: {Name:mkcc32d20e83e1d49b0b84cfe99c00f7b2e242d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:32:47.530339    2708 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.6f213ef3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt
	I0719 05:32:47.542338    2708 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.6f213ef3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key
	I0719 05:32:47.543471    2708 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key
	I0719 05:32:47.543902    2708 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt with IP's: []
	I0719 05:32:47.816672    2708 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt ...
	I0719 05:32:47.817615    2708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt: {Name:mk5200b621c5291451c423f9df7d5235f9017cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:32:47.818553    2708 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key ...
	I0719 05:32:47.818553    2708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key: {Name:mk8ae92e2c9de8ec1a02f7539f1df3a32fabe1bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:32:47.819845    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 05:32:47.820226    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 05:32:47.820226    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 05:32:47.820226    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 05:32:47.820226    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 05:32:47.820226    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 05:32:47.820226    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 05:32:47.828977    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 05:32:47.831149    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 05:32:47.832124    2708 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 05:32:47.832124    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 05:32:47.832419    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 05:32:47.832983    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 05:32:47.833064    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 05:32:47.833680    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 05:32:47.833680    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 05:32:47.834287    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 05:32:47.834287    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:32:47.835422    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:32:47.880937    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 05:32:47.924307    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:32:47.972137    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:32:48.016142    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 05:32:48.063614    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 05:32:48.108367    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:32:48.153246    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 05:32:48.200914    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 05:32:48.245082    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 05:32:48.288140    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:32:48.334515    2708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:32:48.377746    2708 ssh_runner.go:195] Run: openssl version
	I0719 05:32:48.386226    2708 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 05:32:48.396985    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 05:32:48.428061    2708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 05:32:48.434065    2708 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 05:32:48.434253    2708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 05:32:48.447066    2708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 05:32:48.455394    2708 command_runner.go:130] > 51391683
	I0719 05:32:48.468449    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
	I0719 05:32:48.501268    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 05:32:48.532787    2708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 05:32:48.540010    2708 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 05:32:48.540010    2708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 05:32:48.553698    2708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 05:32:48.563104    2708 command_runner.go:130] > 3ec20f2e
	I0719 05:32:48.575107    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:32:48.604458    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:32:48.636644    2708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:32:48.644154    2708 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:32:48.644244    2708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:32:48.655744    2708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:32:48.666094    2708 command_runner.go:130] > b5213941
	I0719 05:32:48.677907    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
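	The `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` sequence above is how OpenSSL-style trust directories are populated: each CA is reachable via a symlink named after its subject-name hash, which `-CApath` lookups use. A self-contained sketch with a throwaway self-signed CA (all paths are temporary, not the ones minikube manages):

```shell
# Create a demo CA, compute its subject hash, and install the hash.0
# symlink that CApath-style verification relies on.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
# Verification now finds the CA through the hashed-symlink directory.
openssl verify -CApath "$dir" "$dir/ca.pem"
```

	The `test -L || ln -fs` guard in the log serves the same purpose as rerunning this: the symlink is only (re)created when missing, so repeated starts are harmless.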
	I0719 05:32:48.707585    2708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:32:48.714641    2708 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 05:32:48.715563    2708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 05:32:48.715946    2708 kubeadm.go:392] StartCluster: {Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:32:48.727148    2708 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 05:32:48.767259    2708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 05:32:48.787786    2708 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0719 05:32:48.788096    2708 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0719 05:32:48.788177    2708 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0719 05:32:48.800175    2708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 05:32:48.830399    2708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 05:32:48.846247    2708 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0719 05:32:48.846867    2708 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0719 05:32:48.846867    2708 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0719 05:32:48.846867    2708 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:32:48.846867    2708 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:32:48.846867    2708 kubeadm.go:157] found existing configuration files:
	
	I0719 05:32:48.859872    2708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 05:32:48.876600    2708 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:32:48.876600    2708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:32:48.887839    2708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 05:32:48.920184    2708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 05:32:48.932645    2708 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:32:48.933720    2708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:32:48.945262    2708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 05:32:48.972017    2708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 05:32:48.987425    2708 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:32:48.987848    2708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:32:48.999569    2708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 05:32:49.030685    2708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 05:32:49.047587    2708 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:32:49.048223    2708 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:32:49.062507    2708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 05:32:49.080346    2708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 05:32:49.289714    2708 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0719 05:32:49.290181    2708 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0719 05:32:49.290249    2708 kubeadm.go:310] [preflight] Running pre-flight checks
	I0719 05:32:49.290249    2708 command_runner.go:130] > [preflight] Running pre-flight checks
	I0719 05:32:49.462103    2708 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 05:32:49.462103    2708 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 05:32:49.462103    2708 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 05:32:49.462103    2708 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 05:32:49.462103    2708 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 05:32:49.462103    2708 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 05:32:49.745969    2708 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 05:32:49.746002    2708 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 05:32:49.753134    2708 out.go:204]   - Generating certificates and keys ...
	I0719 05:32:49.753134    2708 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0719 05:32:49.753134    2708 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0719 05:32:49.753134    2708 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0719 05:32:49.753657    2708 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0719 05:32:50.020715    2708 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 05:32:50.020715    2708 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 05:32:50.437956    2708 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0719 05:32:50.438113    2708 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0719 05:32:50.792019    2708 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0719 05:32:50.792019    2708 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0719 05:32:50.869138    2708 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0719 05:32:50.869138    2708 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0719 05:32:51.233734    2708 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0719 05:32:51.233797    2708 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0719 05:32:51.234957    2708 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-761300] and IPs [172.28.162.16 127.0.0.1 ::1]
	I0719 05:32:51.234957    2708 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-761300] and IPs [172.28.162.16 127.0.0.1 ::1]
	I0719 05:32:51.364668    2708 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0719 05:32:51.364668    2708 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0719 05:32:51.364668    2708 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-761300] and IPs [172.28.162.16 127.0.0.1 ::1]
	I0719 05:32:51.364668    2708 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-761300] and IPs [172.28.162.16 127.0.0.1 ::1]
	I0719 05:32:51.592279    2708 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 05:32:51.592279    2708 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 05:32:51.929224    2708 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 05:32:51.929224    2708 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 05:32:52.084927    2708 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0719 05:32:52.085107    2708 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0719 05:32:52.085258    2708 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 05:32:52.085297    2708 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 05:32:52.253298    2708 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 05:32:52.253298    2708 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 05:32:52.413251    2708 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 05:32:52.413323    2708 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 05:32:52.638868    2708 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 05:32:52.638868    2708 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 05:32:53.009584    2708 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 05:32:53.010392    2708 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 05:32:53.313827    2708 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 05:32:53.313827    2708 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 05:32:53.316180    2708 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 05:32:53.316286    2708 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 05:32:53.323910    2708 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 05:32:53.324440    2708 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 05:32:53.328394    2708 out.go:204]   - Booting up control plane ...
	I0719 05:32:53.328589    2708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 05:32:53.328589    2708 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 05:32:53.328589    2708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 05:32:53.328589    2708 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 05:32:53.328589    2708 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 05:32:53.328589    2708 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 05:32:53.351664    2708 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 05:32:53.351664    2708 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 05:32:53.352854    2708 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 05:32:53.352854    2708 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 05:32:53.352854    2708 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0719 05:32:53.352854    2708 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0719 05:32:53.562578    2708 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 05:32:53.562578    2708 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0719 05:32:53.562578    2708 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 05:32:53.562578    2708 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 05:32:54.064605    2708 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.658295ms
	I0719 05:32:54.064671    2708 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.658295ms
	I0719 05:32:54.064671    2708 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 05:32:54.064671    2708 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0719 05:33:01.066987    2708 command_runner.go:130] > [api-check] The API server is healthy after 7.00223783s
	I0719 05:33:01.066987    2708 kubeadm.go:310] [api-check] The API server is healthy after 7.00223783s
	I0719 05:33:01.092099    2708 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 05:33:01.092099    2708 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 05:33:01.119600    2708 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 05:33:01.119672    2708 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 05:33:01.157229    2708 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0719 05:33:01.157229    2708 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 05:33:01.157629    2708 kubeadm.go:310] [mark-control-plane] Marking the node multinode-761300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 05:33:01.157629    2708 command_runner.go:130] > [mark-control-plane] Marking the node multinode-761300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 05:33:01.177397    2708 kubeadm.go:310] [bootstrap-token] Using token: w2hih0.p5l0gq8tw1zl6eiw
	I0719 05:33:01.177397    2708 command_runner.go:130] > [bootstrap-token] Using token: w2hih0.p5l0gq8tw1zl6eiw
	I0719 05:33:01.182240    2708 out.go:204]   - Configuring RBAC rules ...
	I0719 05:33:01.182240    2708 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 05:33:01.182240    2708 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 05:33:01.188473    2708 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 05:33:01.188473    2708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 05:33:01.211366    2708 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 05:33:01.211366    2708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 05:33:01.218340    2708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 05:33:01.218340    2708 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 05:33:01.225591    2708 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 05:33:01.225591    2708 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 05:33:01.230581    2708 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 05:33:01.231374    2708 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 05:33:01.478916    2708 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 05:33:01.478916    2708 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 05:33:01.938086    2708 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0719 05:33:01.938086    2708 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0719 05:33:02.478859    2708 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0719 05:33:02.478508    2708 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0719 05:33:02.481930    2708 kubeadm.go:310] 
	I0719 05:33:02.482252    2708 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0719 05:33:02.482252    2708 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0719 05:33:02.482252    2708 kubeadm.go:310] 
	I0719 05:33:02.482493    2708 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0719 05:33:02.482550    2708 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0719 05:33:02.482550    2708 kubeadm.go:310] 
	I0719 05:33:02.482646    2708 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0719 05:33:02.482646    2708 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0719 05:33:02.482812    2708 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 05:33:02.482812    2708 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 05:33:02.482812    2708 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 05:33:02.482812    2708 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 05:33:02.482812    2708 kubeadm.go:310] 
	I0719 05:33:02.482812    2708 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0719 05:33:02.482812    2708 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0719 05:33:02.482812    2708 kubeadm.go:310] 
	I0719 05:33:02.483515    2708 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 05:33:02.483515    2708 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 05:33:02.483565    2708 kubeadm.go:310] 
	I0719 05:33:02.483652    2708 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0719 05:33:02.483652    2708 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0719 05:33:02.483903    2708 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 05:33:02.483959    2708 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 05:33:02.484095    2708 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 05:33:02.484163    2708 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 05:33:02.484163    2708 kubeadm.go:310] 
	I0719 05:33:02.484447    2708 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 05:33:02.484447    2708 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0719 05:33:02.484701    2708 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0719 05:33:02.484701    2708 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0719 05:33:02.484796    2708 kubeadm.go:310] 
	I0719 05:33:02.485079    2708 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token w2hih0.p5l0gq8tw1zl6eiw \
	I0719 05:33:02.485079    2708 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token w2hih0.p5l0gq8tw1zl6eiw \
	I0719 05:33:02.485352    2708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 \
	I0719 05:33:02.485436    2708 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 \
	I0719 05:33:02.485489    2708 kubeadm.go:310] 	--control-plane 
	I0719 05:33:02.485597    2708 command_runner.go:130] > 	--control-plane 
	I0719 05:33:02.485597    2708 kubeadm.go:310] 
	I0719 05:33:02.485938    2708 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0719 05:33:02.485938    2708 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0719 05:33:02.486014    2708 kubeadm.go:310] 
	I0719 05:33:02.486147    2708 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token w2hih0.p5l0gq8tw1zl6eiw \
	I0719 05:33:02.486201    2708 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token w2hih0.p5l0gq8tw1zl6eiw \
	I0719 05:33:02.486201    2708 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 
	I0719 05:33:02.486201    2708 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 
	I0719 05:33:02.487585    2708 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 05:33:02.487649    2708 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 05:33:02.487649    2708 cni.go:84] Creating CNI manager for ""
	I0719 05:33:02.487649    2708 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0719 05:33:02.492746    2708 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 05:33:02.505182    2708 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 05:33:02.513370    2708 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0719 05:33:02.513370    2708 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0719 05:33:02.513370    2708 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0719 05:33:02.513370    2708 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 05:33:02.513370    2708 command_runner.go:130] > Access: 2024-07-19 05:31:03.886717400 +0000
	I0719 05:33:02.513370    2708 command_runner.go:130] > Modify: 2024-07-18 23:04:21.000000000 +0000
	I0719 05:33:02.513370    2708 command_runner.go:130] > Change: 2024-07-19 05:30:55.645000000 +0000
	I0719 05:33:02.513370    2708 command_runner.go:130] >  Birth: -
	I0719 05:33:02.513370    2708 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 05:33:02.513370    2708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 05:33:02.566424    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 05:33:03.154519    2708 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0719 05:33:03.164681    2708 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0719 05:33:03.181970    2708 command_runner.go:130] > serviceaccount/kindnet created
	I0719 05:33:03.213020    2708 command_runner.go:130] > daemonset.apps/kindnet created
	I0719 05:33:03.218593    2708 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 05:33:03.235308    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:03.235308    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-761300 minikube.k8s.io/updated_at=2024_07_19T05_33_03_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=multinode-761300 minikube.k8s.io/primary=true
	I0719 05:33:03.255064    2708 command_runner.go:130] > -16
	I0719 05:33:03.256766    2708 ops.go:34] apiserver oom_adj: -16
	I0719 05:33:03.451825    2708 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0719 05:33:03.465200    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:03.495848    2708 command_runner.go:130] > node/multinode-761300 labeled
	I0719 05:33:03.581026    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:03.972025    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:04.091698    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:04.465117    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:04.570089    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:04.976927    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:05.080787    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:05.467649    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:05.605616    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:05.968299    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:06.081154    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:06.468632    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:06.583821    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:06.972533    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:07.092555    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:07.477927    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:07.586012    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:07.966692    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:08.095089    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:08.466125    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:08.568925    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:08.972024    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:09.079991    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:09.474601    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:09.586875    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:09.974699    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:10.087546    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:10.476369    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:10.584875    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:10.975689    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:11.086333    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:11.479136    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:11.593317    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:11.966978    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:12.092066    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:12.472271    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:12.581931    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:12.973931    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:13.093771    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:13.481252    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:13.582496    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:13.966108    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:14.111454    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:14.469121    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:14.588157    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:14.975876    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:15.095229    2708 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0719 05:33:15.466082    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 05:33:15.607094    2708 command_runner.go:130] > NAME      SECRETS   AGE
	I0719 05:33:15.607094    2708 command_runner.go:130] > default   0         0s
	I0719 05:33:15.607703    2708 kubeadm.go:1113] duration metric: took 12.3889621s to wait for elevateKubeSystemPrivileges
	I0719 05:33:15.607746    2708 kubeadm.go:394] duration metric: took 26.8916255s to StartCluster
	I0719 05:33:15.607859    2708 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:33:15.608095    2708 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:33:15.611992    2708 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:33:15.613656    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 05:33:15.613656    2708 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 05:33:15.613772    2708 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 05:33:15.613849    2708 addons.go:69] Setting default-storageclass=true in profile "multinode-761300"
	I0719 05:33:15.613772    2708 addons.go:69] Setting storage-provisioner=true in profile "multinode-761300"
	I0719 05:33:15.613929    2708 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-761300"
	I0719 05:33:15.613960    2708 addons.go:234] Setting addon storage-provisioner=true in "multinode-761300"
	I0719 05:33:15.614429    2708 host.go:66] Checking if "multinode-761300" exists ...
	I0719 05:33:15.614534    2708 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:33:15.614964    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:33:15.619136    2708 out.go:177] * Verifying Kubernetes components...
	I0719 05:33:15.620164    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:33:15.638121    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:33:15.947342    2708 command_runner.go:130] > apiVersion: v1
	I0719 05:33:15.947420    2708 command_runner.go:130] > data:
	I0719 05:33:15.947420    2708 command_runner.go:130] >   Corefile: |
	I0719 05:33:15.947420    2708 command_runner.go:130] >     .:53 {
	I0719 05:33:15.947420    2708 command_runner.go:130] >         errors
	I0719 05:33:15.947517    2708 command_runner.go:130] >         health {
	I0719 05:33:15.947517    2708 command_runner.go:130] >            lameduck 5s
	I0719 05:33:15.947517    2708 command_runner.go:130] >         }
	I0719 05:33:15.947582    2708 command_runner.go:130] >         ready
	I0719 05:33:15.947603    2708 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0719 05:33:15.947603    2708 command_runner.go:130] >            pods insecure
	I0719 05:33:15.947603    2708 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0719 05:33:15.947603    2708 command_runner.go:130] >            ttl 30
	I0719 05:33:15.947603    2708 command_runner.go:130] >         }
	I0719 05:33:15.947668    2708 command_runner.go:130] >         prometheus :9153
	I0719 05:33:15.947703    2708 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0719 05:33:15.947703    2708 command_runner.go:130] >            max_concurrent 1000
	I0719 05:33:15.947703    2708 command_runner.go:130] >         }
	I0719 05:33:15.947703    2708 command_runner.go:130] >         cache 30
	I0719 05:33:15.947703    2708 command_runner.go:130] >         loop
	I0719 05:33:15.947770    2708 command_runner.go:130] >         reload
	I0719 05:33:15.947770    2708 command_runner.go:130] >         loadbalance
	I0719 05:33:15.947770    2708 command_runner.go:130] >     }
	I0719 05:33:15.947770    2708 command_runner.go:130] > kind: ConfigMap
	I0719 05:33:15.947807    2708 command_runner.go:130] > metadata:
	I0719 05:33:15.947823    2708 command_runner.go:130] >   creationTimestamp: "2024-07-19T05:33:01Z"
	I0719 05:33:15.947823    2708 command_runner.go:130] >   name: coredns
	I0719 05:33:15.947823    2708 command_runner.go:130] >   namespace: kube-system
	I0719 05:33:15.947823    2708 command_runner.go:130] >   resourceVersion: "226"
	I0719 05:33:15.947897    2708 command_runner.go:130] >   uid: 85212dc0-aa1e-45c8-b10d-091e2d4b6a4f
	I0719 05:33:15.948023    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.28.160.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 05:33:16.009456    2708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:33:16.473315    2708 command_runner.go:130] > configmap/coredns replaced
	I0719 05:33:16.473315    2708 start.go:971] {"host.minikube.internal": 172.28.160.1} host record injected into CoreDNS's ConfigMap
	I0719 05:33:16.474587    2708 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:33:16.474587    2708 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:33:16.475417    2708 kapi.go:59] client config for multinode-761300: &rest.Config{Host:"https://172.28.162.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 05:33:16.475947    2708 kapi.go:59] client config for multinode-761300: &rest.Config{Host:"https://172.28.162.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 05:33:16.477081    2708 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 05:33:16.477671    2708 node_ready.go:35] waiting up to 6m0s for node "multinode-761300" to be "Ready" ...
	I0719 05:33:16.477671    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:16.477671    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:16.477671    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:16.477671    2708 round_trippers.go:463] GET https://172.28.162.16:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0719 05:33:16.478223    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:16.478223    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:16.478223    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:16.477671    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:16.507039    2708 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0719 05:33:16.507039    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:16.507139    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:16.507139    2708 round_trippers.go:580]     Content-Length: 291
	I0719 05:33:16.507139    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:16 GMT
	I0719 05:33:16.507139    2708 round_trippers.go:580]     Audit-Id: 249aea67-6313-49ed-a995-d71f73af84df
	I0719 05:33:16.507139    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:16.507139    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:16.507242    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:16.507301    2708 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c54989b-21b9-4188-9cf6-0800f8d16c09","resourceVersion":"351","creationTimestamp":"2024-07-19T05:33:01Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0719 05:33:16.508254    2708 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c54989b-21b9-4188-9cf6-0800f8d16c09","resourceVersion":"351","creationTimestamp":"2024-07-19T05:33:01Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0719 05:33:16.508355    2708 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I0719 05:33:16.508355    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:16.508355    2708 round_trippers.go:463] PUT https://172.28.162.16:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0719 05:33:16.508355    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:16 GMT
	I0719 05:33:16.508355    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:16.508355    2708 round_trippers.go:580]     Audit-Id: 4337b73d-3202-4e44-8377-27e17f40940b
	I0719 05:33:16.508355    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:16.508355    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:16.508355    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:16.508355    2708 round_trippers.go:473]     Content-Type: application/json
	I0719 05:33:16.508355    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:16.508355    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:16.508355    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:16.508912    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:16.534442    2708 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0719 05:33:16.534442    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:16.534442    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:16 GMT
	I0719 05:33:16.534442    2708 round_trippers.go:580]     Audit-Id: 1806052e-1f6b-47f9-a847-7f12c62ae4c1
	I0719 05:33:16.534442    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:16.534442    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:16.534442    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:16.534442    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:16.534442    2708 round_trippers.go:580]     Content-Length: 291
	I0719 05:33:16.534442    2708 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c54989b-21b9-4188-9cf6-0800f8d16c09","resourceVersion":"353","creationTimestamp":"2024-07-19T05:33:01Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0719 05:33:16.992794    2708 round_trippers.go:463] GET https://172.28.162.16:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0719 05:33:16.992894    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:16.992894    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:16.992894    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:16.992794    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:16.992997    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:16.992997    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:16.992997    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:17.000228    2708 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 05:33:17.000324    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:17.000324    2708 round_trippers.go:580]     Audit-Id: 008b3c2b-817a-4631-907a-39e41681ca51
	I0719 05:33:17.000324    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:17.000324    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:17.000324    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:17.000324    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:17.000324    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:17 GMT
	I0719 05:33:17.000672    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:17.002266    2708 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 05:33:17.002266    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:17.003264    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:17.003264    2708 round_trippers.go:580]     Content-Length: 291
	I0719 05:33:17.003264    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:17 GMT
	I0719 05:33:17.003264    2708 round_trippers.go:580]     Audit-Id: 899e847e-2d01-4988-95c4-31c324402f17
	I0719 05:33:17.003264    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:17.003264    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:17.003317    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:17.003443    2708 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"8c54989b-21b9-4188-9cf6-0800f8d16c09","resourceVersion":"363","creationTimestamp":"2024-07-19T05:33:01Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0719 05:33:17.003530    2708 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-761300" context rescaled to 1 replicas
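The rescale logged above is a read-modify-write against the Deployment's `autoscaling/v1` Scale subresource: GET the current Scale (spec.replicas 2, resourceVersion 351), set `spec.replicas` to 1, and PUT it back, as the request/response bodies at 05:33:16 show. A minimal sketch of that body transformation in Python; the helper name is made up and this is illustrative, not minikube's actual client-go code.

```python
import json

# Illustrative read-modify-write of the autoscaling/v1 Scale subresource,
# mirroring the coredns rescale visible in the log. Not minikube's real code.
def rescale_body(scale_json: str, replicas: int) -> str:
    """Take the GET response body and produce the PUT request body."""
    scale = json.loads(scale_json)
    assert scale["kind"] == "Scale" and scale["apiVersion"] == "autoscaling/v1"
    scale["spec"]["replicas"] = replicas  # status fields are server-managed; left as-is
    return json.dumps(scale)

# Abbreviated GET body from the log (spec.replicas still 2):
got = ('{"kind":"Scale","apiVersion":"autoscaling/v1",'
       '"metadata":{"name":"coredns","namespace":"kube-system","resourceVersion":"351"},'
       '"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}')
put_body = rescale_body(got, 1)
```

Because the PUT carries the GET's resourceVersion, the apiserver rejects it with a conflict if someone else changed the Scale in between, which is why the subsequent GET in the log shows resourceVersion advancing (351 to 353 to 363) as the change lands.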
	I0719 05:33:17.481848    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:17.481946    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:17.481946    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:17.481946    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:17.486322    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:17.486468    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:17.486468    2708 round_trippers.go:580]     Audit-Id: 7ff5d404-dccd-4779-940a-ad24373ef5ad
	I0719 05:33:17.486468    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:17.486468    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:17.486468    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:17.486468    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:17.486468    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:17 GMT
	I0719 05:33:17.486735    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:17.991969    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:17.992038    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:17.992038    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:17.992038    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:17.996156    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:17.996156    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:17.996156    2708 round_trippers.go:580]     Audit-Id: f683f1c2-dba3-4460-8d9f-0817fcb77b43
	I0719 05:33:17.996156    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:17.996156    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:17.996156    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:17.996156    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:17.996707    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:18 GMT
	I0719 05:33:17.996789    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:18.037243    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:33:18.038047    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:18.039229    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:33:18.039304    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:18.039421    2708 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:33:18.040386    2708 kapi.go:59] client config for multinode-761300: &rest.Config{Host:"https://172.28.162.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 05:33:18.042123    2708 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:33:18.042951    2708 addons.go:234] Setting addon default-storageclass=true in "multinode-761300"
	I0719 05:33:18.043110    2708 host.go:66] Checking if "multinode-761300" exists ...
	I0719 05:33:18.044192    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:33:18.044573    2708 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:33:18.044573    2708 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 05:33:18.045176    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:33:18.483995    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:18.484211    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:18.484211    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:18.484211    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:18.488478    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:18.488907    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:18.488907    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:18.488907    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:18.488907    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:18.488907    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:18.488907    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:18 GMT
	I0719 05:33:18.488907    2708 round_trippers.go:580]     Audit-Id: 19befb17-5aa4-41b9-bfed-e9d821d50a9b
	I0719 05:33:18.489324    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:18.489819    2708 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
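The `node_ready` lines come from polling `GET /api/v1/nodes/multinode-761300` roughly every 500ms (up to the 6m0s deadline logged earlier) and inspecting the Node's `Ready` condition, which stays `"False"` until the kubelet reports the node healthy. A hedged sketch of that status extraction; field paths follow the Kubernetes Node schema, and the helper name is invented for illustration.

```python
import json

def node_ready_status(node_json: str) -> str:
    """Return the status of the Node's Ready condition ("True"/"False"),
    mirroring what node_ready.go logs as: has status "Ready":"False".
    Illustrative only; not minikube's actual Go implementation."""
    node = json.loads(node_json)
    for cond in node.get("status", {}).get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"]
    return "Unknown"  # kubelet has not reported a Ready condition yet

# Abbreviated Node body in the shape the log shows being polled:
node = ('{"kind":"Node","status":{"conditions":['
        '{"type":"MemoryPressure","status":"False"},'
        '{"type":"Ready","status":"False"}]}}')
```

The response bodies above are truncated by the client's request logging before the `status` section, but the per-poll summary line records the extracted value.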
	I0719 05:33:18.992446    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:18.992573    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:18.992573    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:18.992573    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:18.996866    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:18.996866    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:18.996866    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:18.996866    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:19 GMT
	I0719 05:33:18.996866    2708 round_trippers.go:580]     Audit-Id: 1c6257a1-d876-4cf5-96c9-e4dd15b415ac
	I0719 05:33:18.996866    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:18.996866    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:18.996866    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:18.996866    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:19.484817    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:19.484817    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:19.484897    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:19.484897    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:19.692059    2708 round_trippers.go:574] Response Status: 200 OK in 207 milliseconds
	I0719 05:33:19.692059    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:19.692059    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:19.692059    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:19 GMT
	I0719 05:33:19.692059    2708 round_trippers.go:580]     Audit-Id: 68eca5c9-810b-4266-b9bb-7818a1184764
	I0719 05:33:19.692059    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:19.692059    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:19.692059    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:19.692277    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:19.992481    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:19.992551    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:19.992551    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:19.992551    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:20.019161    2708 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0719 05:33:20.019161    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:20.019161    2708 round_trippers.go:580]     Audit-Id: 4d535568-4e27-43d3-8c15-c6e1e6c679da
	I0719 05:33:20.019161    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:20.019598    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:20.019598    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:20.019598    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:20.019598    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:20 GMT
	I0719 05:33:20.019888    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:20.481259    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:20.481259    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:20.481259    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:20.481259    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:20.485314    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:20.485314    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:20.485875    2708 round_trippers.go:580]     Audit-Id: 67855c37-6c86-4d00-843a-123e5f08bff5
	I0719 05:33:20.485875    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:20.485875    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:20.485875    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:20.485875    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:20.485875    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:20 GMT
	I0719 05:33:20.486121    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:20.489258    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:33:20.489286    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:20.489361    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:33:20.646016    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:33:20.646016    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:20.646016    2708 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 05:33:20.647015    2708 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 05:33:20.647015    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
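The libmachine lines show the Hyper-V driver shelling out to PowerShell for every VM query, e.g. `( Hyper-V\Get-VM multinode-761300 ).state` for the VM state and `(( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]` for the guest IP. A sketch of how such an invocation could be assembled; the argument layout is inferred from the `[executing ==>]` log lines, and the function is illustrative rather than minikube's actual driver code.

```python
# Build the PowerShell invocation the Hyper-V driver logs with "[executing ==>]".
# Layout inferred from the log lines above; illustrative only.
POWERSHELL = r"C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe"

def hyperv_state_cmd(vm_name: str) -> list:
    return [
        POWERSHELL,
        "-NoProfile",        # skip profile scripts for a clean, repeatable run
        "-NonInteractive",   # fail fast instead of prompting
        f"( Hyper-V\\Get-VM {vm_name} ).state",
    ]

cmd = hyperv_state_cmd("multinode-761300")
```

The driver then reads the command's stdout (`Running` in the log) and stderr to decide whether the VM is up before attempting SSH/SCP into it.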
	I0719 05:33:20.990031    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:20.990122    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:20.990197    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:20.990197    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:20.995026    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:20.995026    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:20.995026    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:20.995026    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:20.995026    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:20.995026    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:21 GMT
	I0719 05:33:20.995026    2708 round_trippers.go:580]     Audit-Id: f0c3c0c2-22e4-4ce9-92fb-ba1f222690d9
	I0719 05:33:20.995026    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:20.995026    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:20.996031    2708 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:33:21.480686    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:21.480686    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:21.480686    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:21.480686    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:21.484275    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:21.484275    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:21.484275    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:21 GMT
	I0719 05:33:21.484275    2708 round_trippers.go:580]     Audit-Id: d4444f0d-f44c-4421-8803-3c06182a98a1
	I0719 05:33:21.484275    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:21.484275    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:21.484275    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:21.484275    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:21.484275    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:21.988773    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:21.989079    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:21.989135    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:21.989135    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:21.992579    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:21.992579    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:21.992579    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:21.992579    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:21.992579    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:21.992579    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:21.992579    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:22 GMT
	I0719 05:33:21.992579    2708 round_trippers.go:580]     Audit-Id: 6063e40d-b818-42f5-8644-e906ace75df2
	I0719 05:33:21.993693    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:22.478954    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:22.478954    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:22.478954    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:22.478954    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:22.662668    2708 round_trippers.go:574] Response Status: 200 OK in 183 milliseconds
	I0719 05:33:22.663322    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:22.663322    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:22.663409    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:22.663409    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:22 GMT
	I0719 05:33:22.663499    2708 round_trippers.go:580]     Audit-Id: 939285ac-3371-4bb2-8674-120b6ed7269f
	I0719 05:33:22.663499    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:22.663499    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:22.665820    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:22.985438    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:22.985651    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:22.985651    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:22.985651    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:22.989120    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:22.989120    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:22.989120    2708 round_trippers.go:580]     Audit-Id: 411d7d92-5d6d-4ddf-bf68-d55bc55b3a43
	I0719 05:33:22.989120    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:22.989120    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:22.989120    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:22.989120    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:22.989500    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:23 GMT
	I0719 05:33:22.990957    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:23.051455    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:33:23.052407    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:23.052407    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:33:23.316481    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:33:23.316545    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:23.316545    2708 sshutil.go:53] new ssh client: &{IP:172.28.162.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:33:23.468076    2708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 05:33:23.491539    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:23.491539    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:23.491618    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:23.491618    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:23.495107    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:23.495393    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:23.495393    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:23.495393    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:23.495393    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:23 GMT
	I0719 05:33:23.495393    2708 round_trippers.go:580]     Audit-Id: b843de8d-8066-4aad-9e07-87a0e9730ec1
	I0719 05:33:23.495393    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:23.495393    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:23.495716    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:23.496384    2708 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:33:23.983541    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:23.983615    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:23.983615    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:23.983615    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:23.986522    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:23.986522    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:23.986522    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:23.986522    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:24 GMT
	I0719 05:33:23.986522    2708 round_trippers.go:580]     Audit-Id: 449f1b55-713c-417b-b6a2-5675d13bec63
	I0719 05:33:23.986522    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:23.986522    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:23.986522    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:23.987091    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:24.098809    2708 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0719 05:33:24.098922    2708 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0719 05:33:24.098922    2708 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0719 05:33:24.098922    2708 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0719 05:33:24.099016    2708 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0719 05:33:24.099088    2708 command_runner.go:130] > pod/storage-provisioner created
	I0719 05:33:24.488369    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:24.488461    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:24.488461    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:24.488461    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:24.492043    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:24.492155    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:24.492155    2708 round_trippers.go:580]     Audit-Id: 0f0ced78-d4f5-4a98-a9a4-dd3504173dc5
	I0719 05:33:24.492155    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:24.492155    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:24.492155    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:24.492155    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:24.492155    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:24 GMT
	I0719 05:33:24.492852    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:24.980065    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:24.980065    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:24.980065    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:24.980065    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:24.986204    2708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:33:24.986579    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:24.986579    2708 round_trippers.go:580]     Audit-Id: da8a8657-1a2b-40a0-90bc-3de0952f25a3
	I0719 05:33:24.986579    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:24.986579    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:24.986579    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:24.986579    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:24.986579    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:25 GMT
	I0719 05:33:24.986801    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:25.490656    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:25.490864    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:25.490864    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:25.490864    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:25.497287    2708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:33:25.497287    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:25.497287    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:25.497287    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:25.497287    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:25.497287    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:25 GMT
	I0719 05:33:25.497287    2708 round_trippers.go:580]     Audit-Id: 78c497bc-6d2f-43f4-b0f1-f0389b5ce59e
	I0719 05:33:25.497505    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:25.498006    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:25.498452    2708 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:33:25.657967    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:33:25.657967    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:25.658428    2708 sshutil.go:53] new ssh client: &{IP:172.28.162.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:33:25.784829    2708 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 05:33:25.931562    2708 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0719 05:33:25.932466    2708 round_trippers.go:463] GET https://172.28.162.16:8443/apis/storage.k8s.io/v1/storageclasses
	I0719 05:33:25.932584    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:25.932619    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:25.932619    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:25.935010    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:25.935010    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:25.935010    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:25.935010    2708 round_trippers.go:580]     Content-Length: 1273
	I0719 05:33:25.935010    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:25 GMT
	I0719 05:33:25.935010    2708 round_trippers.go:580]     Audit-Id: 864b6e42-4996-48f9-9273-f4004692c7de
	I0719 05:33:25.935010    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:25.935486    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:25.935486    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:25.935687    2708 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"387"},"items":[{"metadata":{"name":"standard","uid":"620c1f6b-e43e-45ae-8bf8-dd6dbd1daa04","resourceVersion":"387","creationTimestamp":"2024-07-19T05:33:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-19T05:33:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0719 05:33:25.936489    2708 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"620c1f6b-e43e-45ae-8bf8-dd6dbd1daa04","resourceVersion":"387","creationTimestamp":"2024-07-19T05:33:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-19T05:33:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0719 05:33:25.936587    2708 round_trippers.go:463] PUT https://172.28.162.16:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0719 05:33:25.936635    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:25.936635    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:25.936635    2708 round_trippers.go:473]     Content-Type: application/json
	I0719 05:33:25.936635    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:25.939429    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:25.939429    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:25.939730    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:25 GMT
	I0719 05:33:25.939730    2708 round_trippers.go:580]     Audit-Id: 51b417b1-b1d0-45f4-b186-f130d8758fe6
	I0719 05:33:25.939730    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:25.939791    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:25.939791    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:25.939791    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:25.939791    2708 round_trippers.go:580]     Content-Length: 1220
	I0719 05:33:25.939858    2708 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"620c1f6b-e43e-45ae-8bf8-dd6dbd1daa04","resourceVersion":"387","creationTimestamp":"2024-07-19T05:33:25Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-07-19T05:33:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0719 05:33:25.946333    2708 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 05:33:25.949522    2708 addons.go:510] duration metric: took 10.3357434s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0719 05:33:25.991334    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:25.991334    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:25.991334    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:25.991334    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:25.996635    2708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:33:25.996635    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:25.996782    2708 round_trippers.go:580]     Audit-Id: 6bf6cab6-a38e-4d00-9b5a-390a26502a02
	I0719 05:33:25.996782    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:25.996782    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:25.996782    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:25.996782    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:25.996782    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:26 GMT
	I0719 05:33:25.997063    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:26.490739    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:26.490803    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:26.490803    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:26.490803    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:26.494544    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:26.494544    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:26.495217    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:26.495217    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:26 GMT
	I0719 05:33:26.495217    2708 round_trippers.go:580]     Audit-Id: 8e5738a8-425e-4750-ba1a-2cbf5c2125dd
	I0719 05:33:26.495217    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:26.495217    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:26.495217    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:26.495696    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:26.990256    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:26.990256    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:26.990256    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:26.990256    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:26.993993    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:26.993993    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:26.993993    2708 round_trippers.go:580]     Audit-Id: 8b0a2914-4081-4b42-8eb3-986402312d36
	I0719 05:33:26.993993    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:26.994887    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:26.995002    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:26.995002    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:26.995002    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:27 GMT
	I0719 05:33:26.995126    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:27.480297    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:27.480297    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:27.480297    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:27.480297    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:27.482984    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:27.483977    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:27.483977    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:27.483977    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:27.484052    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:27 GMT
	I0719 05:33:27.484052    2708 round_trippers.go:580]     Audit-Id: 64c000dc-5d49-49fb-8ace-f292482e99a8
	I0719 05:33:27.484052    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:27.484052    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:27.484052    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:27.989910    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:27.989910    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:27.989910    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:27.989910    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:27.993559    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:27.993559    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:27.993559    2708 round_trippers.go:580]     Audit-Id: b4cc08d3-f232-44c4-b6a6-a768a8f941ca
	I0719 05:33:27.994387    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:27.994387    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:27.994387    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:27.994387    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:27.994387    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:28 GMT
	I0719 05:33:27.994457    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:27.995195    2708 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:33:28.487512    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:28.487645    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:28.487645    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:28.487746    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:28.492013    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:28.492161    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:28.492161    2708 round_trippers.go:580]     Audit-Id: 1de862d0-7162-4545-82bf-0a07c0a496a0
	I0719 05:33:28.492161    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:28.492161    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:28.492233    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:28.492233    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:28.492233    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:28 GMT
	I0719 05:33:28.492430    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:28.985320    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:28.985320    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:28.985320    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:28.985320    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:28.989975    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:28.989975    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:28.989975    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:29 GMT
	I0719 05:33:28.989975    2708 round_trippers.go:580]     Audit-Id: 9da5ad25-13ce-41c0-8e1f-ef93346270b7
	I0719 05:33:28.989975    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:28.990547    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:28.990547    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:28.990547    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:28.991029    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:29.485121    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:29.485121    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:29.485121    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:29.485121    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:29.488729    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:29.489633    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:29.489633    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:29 GMT
	I0719 05:33:29.489633    2708 round_trippers.go:580]     Audit-Id: 6cbc8f35-4564-4458-9685-acaec2e7ff1d
	I0719 05:33:29.489633    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:29.489633    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:29.489633    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:29.489633    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:29.490036    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:29.981044    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:29.981044    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:29.981141    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:29.981141    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:29.986004    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:29.986004    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:29.986128    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:29.986128    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:30 GMT
	I0719 05:33:29.986128    2708 round_trippers.go:580]     Audit-Id: 3b7d7b60-0783-43ed-b7fc-9553a3155be9
	I0719 05:33:29.986128    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:29.986128    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:29.986128    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:29.986433    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:30.479744    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:30.479744    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:30.479824    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:30.479824    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:30.483745    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:30.483745    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:30.483745    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:30.483745    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:30 GMT
	I0719 05:33:30.483745    2708 round_trippers.go:580]     Audit-Id: 27809430-74d7-46bf-9f1a-cd2e74c04bd3
	I0719 05:33:30.483745    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:30.483745    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:30.484187    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:30.484187    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:30.484995    2708 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:33:30.979163    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:30.979275    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:30.979275    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:30.979275    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:30.984031    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:30.984031    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:30.984031    2708 round_trippers.go:580]     Audit-Id: fa56bb84-6135-4f6c-8c2b-87b75394fe6f
	I0719 05:33:30.984031    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:30.984031    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:30.984031    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:30.984031    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:30.984031    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:31 GMT
	I0719 05:33:30.985048    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:31.479565    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:31.479665    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:31.479665    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:31.479665    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:31.484605    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:31.484838    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:31.484838    2708 round_trippers.go:580]     Audit-Id: 6e290b90-0ac8-4fab-815c-5334221df6b2
	I0719 05:33:31.484838    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:31.484838    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:31.484838    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:31.484838    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:31.484917    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:31 GMT
	I0719 05:33:31.484917    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:31.978853    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:31.979102    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:31.979102    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:31.979102    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:31.985887    2708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:33:31.985887    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:31.985887    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:31.985887    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:31.985887    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:32 GMT
	I0719 05:33:31.985887    2708 round_trippers.go:580]     Audit-Id: 1ba7134c-f45b-4fc9-9588-539ecc3434d2
	I0719 05:33:31.985887    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:31.985887    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:31.986459    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:32.483108    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:32.483203    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:32.483203    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:32.483203    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:32.488744    2708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:33:32.488804    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:32.488914    2708 round_trippers.go:580]     Audit-Id: d2b21399-a686-4577-91e5-de63c4f06ccf
	I0719 05:33:32.488914    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:32.489016    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:32.489069    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:32.489069    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:32.489069    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:32 GMT
	I0719 05:33:32.489069    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"307","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0719 05:33:32.489838    2708 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:33:32.981284    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:32.981284    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:32.981417    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:32.981417    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:32.986141    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:32.986226    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:32.986226    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:32.986226    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:32.986226    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:32.986226    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:32.986299    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:33 GMT
	I0719 05:33:32.986299    2708 round_trippers.go:580]     Audit-Id: e533c7a6-6d34-4997-bc85-9b5a55b96737
	I0719 05:33:32.986570    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"391","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0719 05:33:33.491336    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:33.491423    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:33.491423    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:33.491423    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:33.495218    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:33.495218    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:33.495931    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:33.495931    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:33.495931    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:33.495931    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:33.495931    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:33 GMT
	I0719 05:33:33.495931    2708 round_trippers.go:580]     Audit-Id: 20edc9b8-70d0-49e3-843b-cedb3254f2d6
	I0719 05:33:33.496240    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"391","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0719 05:33:33.978359    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:33.978547    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:33.978547    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:33.978547    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:33.982432    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:33.982432    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:33.982432    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:33.982432    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:33.982432    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:34 GMT
	I0719 05:33:33.982432    2708 round_trippers.go:580]     Audit-Id: 8ab7c421-98c6-4139-8c3f-208e951cd564
	I0719 05:33:33.982432    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:33.982432    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:33.983420    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"391","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0719 05:33:34.480400    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:34.480400    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:34.480400    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:34.480400    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:34.483985    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:34.483985    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:34.483985    2708 round_trippers.go:580]     Audit-Id: 7acede3a-2e9d-4d08-96d8-c06141e77466
	I0719 05:33:34.483985    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:34.483985    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:34.483985    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:34.483985    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:34.484524    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:34 GMT
	I0719 05:33:34.484712    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"391","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0719 05:33:34.981916    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:34.981916    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:34.981916    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:34.981916    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:34.985499    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:34.985499    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:34.985499    2708 round_trippers.go:580]     Audit-Id: 7efd7339-c339-437c-9cd8-5dfd0f8c8216
	I0719 05:33:34.985499    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:34.985499    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:34.985499    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:34.985499    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:34.985499    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:35 GMT
	I0719 05:33:34.986157    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"391","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0719 05:33:34.986716    2708 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:33:35.481273    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:35.481503    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:35.481533    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:35.481533    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:35.485694    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:35.485820    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:35.485820    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:35 GMT
	I0719 05:33:35.485820    2708 round_trippers.go:580]     Audit-Id: e8f5df24-ce95-48ba-b2e1-de7da8bfbbea
	I0719 05:33:35.485820    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:35.485820    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:35.485820    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:35.485820    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:35.486737    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"391","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0719 05:33:35.981389    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:35.981457    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:35.981457    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:35.981457    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:35.985821    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:35.985928    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:35.985928    2708 round_trippers.go:580]     Audit-Id: 991c935c-b8c3-4add-a239-471970c8e499
	I0719 05:33:35.985928    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:35.985928    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:35.985928    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:35.985928    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:35.985928    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:36 GMT
	I0719 05:33:35.985928    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"391","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0719 05:33:36.478211    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:36.478211    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:36.478211    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:36.478451    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:36.483535    2708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:33:36.483535    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:36.483535    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:36 GMT
	I0719 05:33:36.483535    2708 round_trippers.go:580]     Audit-Id: 41d0deca-bea0-4097-8835-46780a9f3f21
	I0719 05:33:36.483535    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:36.483535    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:36.483535    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:36.483535    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:36.484420    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"391","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 5104 chars]
	I0719 05:33:36.980235    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:36.980235    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:36.980235    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:36.980235    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:36.983286    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:36.983286    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:36.983286    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:36.983286    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:36.983286    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:36.983286    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:37 GMT
	I0719 05:33:36.983286    2708 round_trippers.go:580]     Audit-Id: ec1a9d78-d960-4eab-9cf5-d52110fcb396
	I0719 05:33:36.983286    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:36.983286    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:36.984179    2708 node_ready.go:49] node "multinode-761300" has status "Ready":"True"
	I0719 05:33:36.984179    2708 node_ready.go:38] duration metric: took 20.5062635s for node "multinode-761300" to be "Ready" ...
	I0719 05:33:36.984179    2708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:33:36.984179    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods
	I0719 05:33:36.984179    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:36.984179    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:36.984179    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:36.991070    2708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:33:36.992146    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:36.992146    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:36.992146    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:36.992146    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:37 GMT
	I0719 05:33:36.992146    2708 round_trippers.go:580]     Audit-Id: 690f9b31-612c-4413-abac-f870a66934b1
	I0719 05:33:36.992146    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:36.992146    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:36.993585    2708 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"400"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"398","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56337 chars]
	I0719 05:33:36.999025    2708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:36.999154    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:33:36.999228    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:36.999228    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:36.999283    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:37.003139    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:37.003139    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:37.003139    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:37.003139    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:37.003139    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:37 GMT
	I0719 05:33:37.003139    2708 round_trippers.go:580]     Audit-Id: 7729edf1-f75d-432e-848b-fd6e95f250d8
	I0719 05:33:37.003139    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:37.003139    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:37.003139    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"398","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0719 05:33:37.003139    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:37.003139    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:37.003139    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:37.003139    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:37.007923    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:37.007923    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:37.007923    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:37 GMT
	I0719 05:33:37.007923    2708 round_trippers.go:580]     Audit-Id: 0d7dcae7-750c-49b6-9bd9-79cffa6e97eb
	I0719 05:33:37.007923    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:37.007923    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:37.007923    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:37.007923    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:37.007923    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:37.505442    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:33:37.505442    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:37.505521    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:37.505521    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:37.513714    2708 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 05:33:37.513898    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:37.513924    2708 round_trippers.go:580]     Audit-Id: d06bc220-493c-4f40-a089-f827f11ddfaa
	I0719 05:33:37.513924    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:37.513924    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:37.513924    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:37.513924    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:37.513924    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:37 GMT
	I0719 05:33:37.514002    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"398","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0719 05:33:37.514779    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:37.514779    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:37.514779    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:37.514779    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:37.521796    2708 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 05:33:37.521796    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:37.521796    2708 round_trippers.go:580]     Audit-Id: 606862de-5524-4aad-9028-a41ff18186ee
	I0719 05:33:37.521796    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:37.521796    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:37.521796    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:37.521796    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:37.522144    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:37 GMT
	I0719 05:33:37.522466    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:37.999130    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:33:37.999130    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:37.999130    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:37.999130    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:38.005148    2708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:33:38.005148    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:38.005393    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:38.005393    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:38.005393    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:38.005393    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:38.005393    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:38 GMT
	I0719 05:33:38.005393    2708 round_trippers.go:580]     Audit-Id: 7e8e3013-e683-4500-8bdd-472135ec6c4a
	I0719 05:33:38.005512    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"398","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0719 05:33:38.006146    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:38.006146    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:38.006146    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:38.006146    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:38.009353    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:38.009816    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:38.010109    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:38.010109    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:38.010109    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:38.010109    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:38 GMT
	I0719 05:33:38.010109    2708 round_trippers.go:580]     Audit-Id: cf1ff559-230f-456b-9081-5421b555cd80
	I0719 05:33:38.010109    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:38.010109    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:38.502267    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:33:38.502526    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:38.502526    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:38.502526    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:38.506332    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:38.506918    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:38.506918    2708 round_trippers.go:580]     Audit-Id: 44b12ef5-d1e0-455e-926e-9fed777dfb80
	I0719 05:33:38.506918    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:38.506918    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:38.506918    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:38.506918    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:38.507019    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:38 GMT
	I0719 05:33:38.507199    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"398","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0719 05:33:38.508027    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:38.508027    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:38.508027    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:38.508027    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:38.510628    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:38.510628    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:38.510628    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:38.510628    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:38.510628    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:38.510628    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:38 GMT
	I0719 05:33:38.511451    2708 round_trippers.go:580]     Audit-Id: 30bde929-e660-4464-ad4b-7a9d6fc4fb19
	I0719 05:33:38.511451    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:38.511736    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:39.004988    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:33:39.004988    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.004988    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.004988    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.009647    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:39.009985    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.009985    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.009985    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.009985    2708 round_trippers.go:580]     Audit-Id: 4eb86c9d-fea2-44b4-a3b4-3e8375f01fe1
	I0719 05:33:39.009985    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.009985    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.009985    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.010381    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"413","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0719 05:33:39.011208    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:39.011208    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.011208    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.011208    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.014656    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:39.014759    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.014777    2708 round_trippers.go:580]     Audit-Id: 334f850a-52af-4a3b-822f-db02b9d6283a
	I0719 05:33:39.014777    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.014777    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.014777    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.014777    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.014777    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.015913    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:39.016178    2708 pod_ready.go:92] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"True"
	I0719 05:33:39.016178    2708 pod_ready.go:81] duration metric: took 2.0170569s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.016178    2708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.016178    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-761300
	I0719 05:33:39.016178    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.016178    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.016178    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.020868    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:39.020868    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.020868    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.020868    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.020868    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.020868    2708 round_trippers.go:580]     Audit-Id: e0651eec-46de-4a74-8590-56354bfa2d99
	I0719 05:33:39.020868    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.020868    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.020868    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-761300","namespace":"kube-system","uid":"a2361ae1-fa19-4fed-9917-abc94c9107aa","resourceVersion":"285","creationTimestamp":"2024-07-19T05:32:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.162.16:2379","kubernetes.io/config.hash":"1436f6b96e809d6c17e4b090c15cf220","kubernetes.io/config.mirror":"1436f6b96e809d6c17e4b090c15cf220","kubernetes.io/config.seen":"2024-07-19T05:32:54.007753868Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0719 05:33:39.021795    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:39.021795    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.021795    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.021795    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.024667    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:39.024667    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.024667    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.024667    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.024667    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.024667    2708 round_trippers.go:580]     Audit-Id: a01249e8-b79a-45d4-aa16-8cb6fc8d75d1
	I0719 05:33:39.024667    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.024667    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.024667    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:39.025600    2708 pod_ready.go:92] pod "etcd-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:33:39.025600    2708 pod_ready.go:81] duration metric: took 9.4218ms for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.025662    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.025768    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-761300
	I0719 05:33:39.025768    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.025768    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.025768    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.028390    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:39.028390    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.028390    2708 round_trippers.go:580]     Audit-Id: efae8eb6-9cba-4859-91ca-d221c8e280a3
	I0719 05:33:39.028390    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.028390    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.028720    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.028761    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.028761    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.028821    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-761300","namespace":"kube-system","uid":"36919164-4b0f-48b4-b71b-024def806c8d","resourceVersion":"281","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.162.16:8443","kubernetes.io/config.hash":"b3cb2b1621f72c668585d21689da850a","kubernetes.io/config.mirror":"b3cb2b1621f72c668585d21689da850a","kubernetes.io/config.seen":"2024-07-19T05:33:02.001206567Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0719 05:33:39.029532    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:39.029532    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.029532    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.029532    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.031855    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:39.031855    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.031855    2708 round_trippers.go:580]     Audit-Id: 892514bb-eea6-4ce1-a1d5-015dcd2568c3
	I0719 05:33:39.031855    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.031855    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.031855    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.031855    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.031855    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.032579    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:39.032790    2708 pod_ready.go:92] pod "kube-apiserver-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:33:39.032790    2708 pod_ready.go:81] duration metric: took 7.1283ms for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.032790    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.032790    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-761300
	I0719 05:33:39.032790    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.032790    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.032790    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.036395    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:39.036693    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.036693    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.036693    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.036693    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.036693    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.036693    2708 round_trippers.go:580]     Audit-Id: dbce1c3d-bc19-4af7-87d7-c19fa5f4ba83
	I0719 05:33:39.036693    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.037587    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-761300","namespace":"kube-system","uid":"2124834c-1961-49fb-8699-fba2fc5dd0ac","resourceVersion":"280","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.mirror":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.seen":"2024-07-19T05:33:02.001207967Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0719 05:33:39.038222    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:39.038222    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.038222    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.038222    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.041033    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:39.041033    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.042055    2708 round_trippers.go:580]     Audit-Id: 531546e5-d28b-453a-89e0-ca03e26cc79f
	I0719 05:33:39.042055    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.042055    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.042055    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.042055    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.042093    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.042238    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:39.042354    2708 pod_ready.go:92] pod "kube-controller-manager-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:33:39.042354    2708 pod_ready.go:81] duration metric: took 9.5639ms for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.042354    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.042354    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:33:39.042354    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.042354    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.042354    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.046977    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:39.046977    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.047097    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.047141    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.047141    2708 round_trippers.go:580]     Audit-Id: 93da1eae-1d74-45f5-8460-aaa81a44f5c4
	I0719 05:33:39.047141    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.047141    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.047141    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.048665    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4z7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"17ff8aac-2d57-44fb-a3ec-f0d6ea181881","resourceVersion":"368","creationTimestamp":"2024-07-19T05:33:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0719 05:33:39.048912    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:39.048912    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.048912    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.048912    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.052181    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:39.052181    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.052181    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.052181    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.052181    2708 round_trippers.go:580]     Audit-Id: 646ba20a-0bab-4d0e-92a7-97b186755ed6
	I0719 05:33:39.052181    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.052181    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.052181    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.053064    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:39.053064    2708 pod_ready.go:92] pod "kube-proxy-c4z7f" in "kube-system" namespace has status "Ready":"True"
	I0719 05:33:39.053064    2708 pod_ready.go:81] duration metric: took 10.7102ms for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.053064    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.208021    2708 request.go:629] Waited for 154.9544ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:33:39.208369    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:33:39.208398    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.208398    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.208398    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.211733    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:39.211733    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.211733    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.211733    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.211733    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.211733    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.211733    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.211733    2708 round_trippers.go:580]     Audit-Id: c39642fc-269a-4273-9d12-442b692538fc
	I0719 05:33:39.213756    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-761300","namespace":"kube-system","uid":"49a739d1-1ae3-4a41-aebc-0eb7b2b4f242","resourceVersion":"287","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.mirror":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.seen":"2024-07-19T05:33:02.001209067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0719 05:33:39.411782    2708 request.go:629] Waited for 196.5936ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:39.411940    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:33:39.411940    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.411940    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.411940    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.416517    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:33:39.417045    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.417171    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.417171    2708 round_trippers.go:580]     Audit-Id: 74863f4a-34d9-4097-8f80-7e1394a4de3f
	I0719 05:33:39.417229    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.417229    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.417229    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.417229    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.417229    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:33:39.418052    2708 pod_ready.go:92] pod "kube-scheduler-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:33:39.418052    2708 pod_ready.go:81] duration metric: took 364.9838ms for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:33:39.418052    2708 pod_ready.go:38] duration metric: took 2.4338449s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:33:39.418052    2708 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:33:39.431036    2708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:33:39.457582    2708 command_runner.go:130] > 2108
	I0719 05:33:39.458221    2708 api_server.go:72] duration metric: took 23.8440882s to wait for apiserver process to appear ...
	I0719 05:33:39.458221    2708 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:33:39.458221    2708 api_server.go:253] Checking apiserver healthz at https://172.28.162.16:8443/healthz ...
	I0719 05:33:39.468196    2708 api_server.go:279] https://172.28.162.16:8443/healthz returned 200:
	ok
	I0719 05:33:39.468196    2708 round_trippers.go:463] GET https://172.28.162.16:8443/version
	I0719 05:33:39.468196    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.468196    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.468196    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.469998    2708 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 05:33:39.469998    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.469998    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.469998    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.469998    2708 round_trippers.go:580]     Content-Length: 263
	I0719 05:33:39.469998    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.469998    2708 round_trippers.go:580]     Audit-Id: 4ef91a41-e554-4440-b3a8-1feb367cb4f2
	I0719 05:33:39.469998    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.470260    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.470260    2708 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 05:33:39.470260    2708 api_server.go:141] control plane version: v1.30.3
	I0719 05:33:39.470260    2708 api_server.go:131] duration metric: took 12.0388ms to wait for apiserver health ...
	I0719 05:33:39.470260    2708 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:33:39.614657    2708 request.go:629] Waited for 144.208ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods
	I0719 05:33:39.614937    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods
	I0719 05:33:39.614937    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.614937    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.614937    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.620420    2708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:33:39.620420    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.620420    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.620847    2708 round_trippers.go:580]     Audit-Id: 07344008-93e0-41f1-8ceb-367df05b00b3
	I0719 05:33:39.620847    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.620847    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.620847    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.620847    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.622654    2708 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"413","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0719 05:33:39.625312    2708 system_pods.go:59] 8 kube-system pods found
	I0719 05:33:39.625611    2708 system_pods.go:61] "coredns-7db6d8ff4d-hw9kh" [d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4] Running
	I0719 05:33:39.625611    2708 system_pods.go:61] "etcd-multinode-761300" [a2361ae1-fa19-4fed-9917-abc94c9107aa] Running
	I0719 05:33:39.625611    2708 system_pods.go:61] "kindnet-dj497" [124722d1-6c9c-4de4-b242-2f58e89b223b] Running
	I0719 05:33:39.625611    2708 system_pods.go:61] "kube-apiserver-multinode-761300" [36919164-4b0f-48b4-b71b-024def806c8d] Running
	I0719 05:33:39.625611    2708 system_pods.go:61] "kube-controller-manager-multinode-761300" [2124834c-1961-49fb-8699-fba2fc5dd0ac] Running
	I0719 05:33:39.625611    2708 system_pods.go:61] "kube-proxy-c4z7f" [17ff8aac-2d57-44fb-a3ec-f0d6ea181881] Running
	I0719 05:33:39.625611    2708 system_pods.go:61] "kube-scheduler-multinode-761300" [49a739d1-1ae3-4a41-aebc-0eb7b2b4f242] Running
	I0719 05:33:39.625611    2708 system_pods.go:61] "storage-provisioner" [87c864ea-0853-481c-ab24-2ab209760f69] Running
	I0719 05:33:39.625611    2708 system_pods.go:74] duration metric: took 155.3487ms to wait for pod list to return data ...
	I0719 05:33:39.625611    2708 default_sa.go:34] waiting for default service account to be created ...
	I0719 05:33:39.818018    2708 request.go:629] Waited for 191.8528ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/namespaces/default/serviceaccounts
	I0719 05:33:39.818263    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/default/serviceaccounts
	I0719 05:33:39.818374    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:39.818374    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:39.818452    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:39.820793    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:33:39.820793    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:39.820793    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:39.820793    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:39.820793    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:39.820793    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:39.820793    2708 round_trippers.go:580]     Content-Length: 261
	I0719 05:33:39.821570    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:39 GMT
	I0719 05:33:39.821570    2708 round_trippers.go:580]     Audit-Id: 63d118c3-0fde-47fb-96d8-907560be18f9
	I0719 05:33:39.821570    2708 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"401ce23d-5c82-4e9b-b140-9f6a95fa53e6","resourceVersion":"308","creationTimestamp":"2024-07-19T05:33:15Z"}}]}
	I0719 05:33:39.821897    2708 default_sa.go:45] found service account: "default"
	I0719 05:33:39.821897    2708 default_sa.go:55] duration metric: took 196.2842ms for default service account to be created ...
	I0719 05:33:39.821897    2708 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 05:33:40.005880    2708 request.go:629] Waited for 183.6257ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods
	I0719 05:33:40.005991    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods
	I0719 05:33:40.005991    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:40.005991    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:40.006234    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:40.009987    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:40.009987    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:40.009987    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:40 GMT
	I0719 05:33:40.009987    2708 round_trippers.go:580]     Audit-Id: d038e09b-aa4c-4d7f-bdcc-5679e89e47cb
	I0719 05:33:40.009987    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:40.011202    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:40.011202    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:40.011202    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:40.012159    2708 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"413","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0719 05:33:40.014792    2708 system_pods.go:86] 8 kube-system pods found
	I0719 05:33:40.014792    2708 system_pods.go:89] "coredns-7db6d8ff4d-hw9kh" [d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4] Running
	I0719 05:33:40.014792    2708 system_pods.go:89] "etcd-multinode-761300" [a2361ae1-fa19-4fed-9917-abc94c9107aa] Running
	I0719 05:33:40.014792    2708 system_pods.go:89] "kindnet-dj497" [124722d1-6c9c-4de4-b242-2f58e89b223b] Running
	I0719 05:33:40.014792    2708 system_pods.go:89] "kube-apiserver-multinode-761300" [36919164-4b0f-48b4-b71b-024def806c8d] Running
	I0719 05:33:40.014792    2708 system_pods.go:89] "kube-controller-manager-multinode-761300" [2124834c-1961-49fb-8699-fba2fc5dd0ac] Running
	I0719 05:33:40.014792    2708 system_pods.go:89] "kube-proxy-c4z7f" [17ff8aac-2d57-44fb-a3ec-f0d6ea181881] Running
	I0719 05:33:40.014792    2708 system_pods.go:89] "kube-scheduler-multinode-761300" [49a739d1-1ae3-4a41-aebc-0eb7b2b4f242] Running
	I0719 05:33:40.014792    2708 system_pods.go:89] "storage-provisioner" [87c864ea-0853-481c-ab24-2ab209760f69] Running
	I0719 05:33:40.014792    2708 system_pods.go:126] duration metric: took 192.8926ms to wait for k8s-apps to be running ...
	I0719 05:33:40.014792    2708 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 05:33:40.025315    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:33:40.060235    2708 system_svc.go:56] duration metric: took 45.4422ms WaitForService to wait for kubelet
	I0719 05:33:40.060235    2708 kubeadm.go:582] duration metric: took 24.4460947s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:33:40.060235    2708 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:33:40.209240    2708 request.go:629] Waited for 149.0031ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/nodes
	I0719 05:33:40.209447    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes
	I0719 05:33:40.209447    2708 round_trippers.go:469] Request Headers:
	I0719 05:33:40.209551    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:33:40.209583    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:33:40.212942    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:33:40.212942    2708 round_trippers.go:577] Response Headers:
	I0719 05:33:40.212942    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:33:40 GMT
	I0719 05:33:40.212942    2708 round_trippers.go:580]     Audit-Id: b28f36e3-9e3e-41f5-88da-11db883fe9ec
	I0719 05:33:40.212942    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:33:40.213858    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:33:40.213858    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:33:40.213858    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:33:40.214023    2708 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"419"},"items":[{"metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5012 chars]
	I0719 05:33:40.214628    2708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:33:40.214628    2708 node_conditions.go:123] node cpu capacity is 2
	I0719 05:33:40.214628    2708 node_conditions.go:105] duration metric: took 154.3908ms to run NodePressure ...
	I0719 05:33:40.214628    2708 start.go:241] waiting for startup goroutines ...
	I0719 05:33:40.214628    2708 start.go:246] waiting for cluster config update ...
	I0719 05:33:40.214628    2708 start.go:255] writing updated cluster config ...
	I0719 05:33:40.221736    2708 out.go:177] 
	I0719 05:33:40.225315    2708 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:33:40.231326    2708 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:33:40.231326    2708 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:33:40.236722    2708 out.go:177] * Starting "multinode-761300-m02" worker node in "multinode-761300" cluster
	I0719 05:33:40.239718    2708 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:33:40.239718    2708 cache.go:56] Caching tarball of preloaded images
	I0719 05:33:40.240503    2708 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:33:40.240503    2708 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 05:33:40.240829    2708 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:33:40.245120    2708 start.go:360] acquireMachinesLock for multinode-761300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:33:40.245120    2708 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-761300-m02"
	I0719 05:33:40.246161    2708 start.go:93] Provisioning new machine with config: &{Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 05:33:40.246161    2708 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0719 05:33:40.250502    2708 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 05:33:40.250502    2708 start.go:159] libmachine.API.Create for "multinode-761300" (driver="hyperv")
	I0719 05:33:40.251020    2708 client.go:168] LocalClient.Create starting
	I0719 05:33:40.251175    2708 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0719 05:33:40.251175    2708 main.go:141] libmachine: Decoding PEM data...
	I0719 05:33:40.251716    2708 main.go:141] libmachine: Parsing certificate...
	I0719 05:33:40.251972    2708 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0719 05:33:40.252233    2708 main.go:141] libmachine: Decoding PEM data...
	I0719 05:33:40.252233    2708 main.go:141] libmachine: Parsing certificate...
	I0719 05:33:40.252506    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0719 05:33:42.218519    2708 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0719 05:33:42.219096    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:42.219096    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0719 05:33:43.955955    2708 main.go:141] libmachine: [stdout =====>] : False
	
	I0719 05:33:43.955955    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:43.956033    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 05:33:45.456040    2708 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 05:33:45.456970    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:45.456970    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 05:33:49.169263    2708 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 05:33:49.169479    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:49.171614    2708 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0719 05:33:49.594940    2708 main.go:141] libmachine: Creating SSH key...
	I0719 05:33:49.840076    2708 main.go:141] libmachine: Creating VM...
	I0719 05:33:49.840076    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0719 05:33:52.867535    2708 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0719 05:33:52.867535    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:52.868577    2708 main.go:141] libmachine: Using switch "Default Switch"
	I0719 05:33:52.868621    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0719 05:33:54.681632    2708 main.go:141] libmachine: [stdout =====>] : True
	
	I0719 05:33:54.681632    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:54.681632    2708 main.go:141] libmachine: Creating VHD
	I0719 05:33:54.681840    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0719 05:33:58.564744    2708 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 80443C87-A0BC-4D3C-9D44-2B5B9C9E2F09
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0719 05:33:58.565589    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:33:58.565678    2708 main.go:141] libmachine: Writing magic tar header
	I0719 05:33:58.565678    2708 main.go:141] libmachine: Writing SSH key tar header
	I0719 05:33:58.575130    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0719 05:34:01.813446    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:01.813446    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:01.814239    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\disk.vhd' -SizeBytes 20000MB
	I0719 05:34:04.429203    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:04.429544    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:04.429649    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-761300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0719 05:34:08.193956    2708 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-761300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0719 05:34:08.194937    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:08.195023    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-761300-m02 -DynamicMemoryEnabled $false
	I0719 05:34:10.485396    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:10.485396    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:10.485665    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-761300-m02 -Count 2
	I0719 05:34:12.722680    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:12.722680    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:12.722835    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-761300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\boot2docker.iso'
	I0719 05:34:15.348355    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:15.348450    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:15.348450    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-761300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\disk.vhd'
	I0719 05:34:18.083753    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:18.083753    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:18.083753    2708 main.go:141] libmachine: Starting VM...
	I0719 05:34:18.083753    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-761300-m02
	I0719 05:34:21.221975    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:21.221975    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:21.221975    2708 main.go:141] libmachine: Waiting for host to start...
	I0719 05:34:21.223355    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:34:23.612076    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:34:23.612076    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:23.612076    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:34:26.201266    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:26.201331    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:27.203469    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:34:29.520381    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:34:29.520692    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:29.520692    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:34:32.109352    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:32.109635    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:33.113086    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:34:35.415523    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:34:35.415523    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:35.415678    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:34:38.024683    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:38.024683    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:39.030894    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:34:41.350465    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:34:41.350465    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:41.351035    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:34:43.950535    2708 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:34:43.951553    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:44.953484    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:34:47.249077    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:34:47.249077    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:47.249077    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:34:49.919873    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:34:49.919873    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:49.920510    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:34:52.125246    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:34:52.125246    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:52.125704    2708 machine.go:94] provisionDockerMachine start ...
	I0719 05:34:52.125704    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:34:54.341927    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:34:54.341927    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:54.342108    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:34:56.980559    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:34:56.980559    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:56.987735    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:34:56.988263    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.167.151 22 <nil> <nil>}
	I0719 05:34:56.988263    2708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:34:57.118515    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 05:34:57.118671    2708 buildroot.go:166] provisioning hostname "multinode-761300-m02"
	I0719 05:34:57.118797    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:34:59.330929    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:34:59.330929    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:34:59.331758    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:01.969844    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:01.970008    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:01.975357    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:35:01.975357    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.167.151 22 <nil> <nil>}
	I0719 05:35:01.975357    2708 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-761300-m02 && echo "multinode-761300-m02" | sudo tee /etc/hostname
	I0719 05:35:02.143062    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-761300-m02
	
	I0719 05:35:02.143062    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:04.345968    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:04.345968    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:04.346109    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:06.960035    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:06.960086    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:06.965121    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:35:06.965275    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.167.151 22 <nil> <nil>}
	I0719 05:35:06.965275    2708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-761300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-761300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-761300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:35:07.120888    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:35:07.120888    2708 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 05:35:07.120888    2708 buildroot.go:174] setting up certificates
	I0719 05:35:07.120888    2708 provision.go:84] configureAuth start
	I0719 05:35:07.120888    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:09.396030    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:09.396530    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:09.396530    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:12.005675    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:12.005675    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:12.005995    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:14.201683    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:14.201683    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:14.201978    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:16.816967    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:16.816967    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:16.816967    2708 provision.go:143] copyHostCerts
	I0719 05:35:16.816967    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 05:35:16.817503    2708 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:35:16.817616    2708 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 05:35:16.817982    2708 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 05:35:16.819323    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 05:35:16.819629    2708 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:35:16.819690    2708 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 05:35:16.820116    2708 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:35:16.821128    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 05:35:16.821196    2708 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:35:16.821196    2708 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 05:35:16.821729    2708 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:35:16.822759    2708 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-761300-m02 san=[127.0.0.1 172.28.167.151 localhost minikube multinode-761300-m02]
	I0719 05:35:16.999002    2708 provision.go:177] copyRemoteCerts
	I0719 05:35:17.012340    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:35:17.012340    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:19.217591    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:19.217591    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:19.217591    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:21.842819    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:21.842819    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:21.843251    2708 sshutil.go:53] new ssh client: &{IP:172.28.167.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:35:21.946898    2708 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9344998s)
	I0719 05:35:21.946898    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 05:35:21.946898    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0719 05:35:21.996622    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 05:35:21.997249    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:35:22.042517    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 05:35:22.042897    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:35:22.093677    2708 provision.go:87] duration metric: took 14.9726114s to configureAuth
	I0719 05:35:22.093677    2708 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:35:22.094333    2708 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:35:22.094488    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:24.293752    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:24.293752    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:24.293752    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:26.927321    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:26.927585    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:26.933537    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:35:26.934087    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.167.151 22 <nil> <nil>}
	I0719 05:35:26.934087    2708 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:35:27.069250    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 05:35:27.069334    2708 buildroot.go:70] root file system type: tmpfs
	I0719 05:35:27.069487    2708 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 05:35:27.069579    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:29.301268    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:29.301685    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:29.301742    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:31.930151    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:31.930151    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:31.936790    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:35:31.937583    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.167.151 22 <nil> <nil>}
	I0719 05:35:31.937583    2708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.162.16"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:35:32.096841    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.162.16
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:35:32.096922    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:34.393340    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:34.393340    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:34.393340    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:37.100854    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:37.100854    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:37.106324    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:35:37.107075    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.167.151 22 <nil> <nil>}
	I0719 05:35:37.107075    2708 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:35:39.383458    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 05:35:39.383458    2708 machine.go:97] duration metric: took 47.2571918s to provisionDockerMachine
	I0719 05:35:39.383458    2708 client.go:171] duration metric: took 1m59.1310203s to LocalClient.Create
	I0719 05:35:39.383458    2708 start.go:167] duration metric: took 1m59.1315382s to libmachine.API.Create "multinode-761300"
	I0719 05:35:39.383458    2708 start.go:293] postStartSetup for "multinode-761300-m02" (driver="hyperv")
	I0719 05:35:39.384015    2708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:35:39.405309    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:35:39.405309    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:41.612981    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:41.612981    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:41.612981    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:44.290523    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:44.290633    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:44.291084    2708 sshutil.go:53] new ssh client: &{IP:172.28.167.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:35:44.394295    2708 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9888123s)
	I0719 05:35:44.407294    2708 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:35:44.414676    2708 command_runner.go:130] > NAME=Buildroot
	I0719 05:35:44.414753    2708 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 05:35:44.414753    2708 command_runner.go:130] > ID=buildroot
	I0719 05:35:44.414753    2708 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 05:35:44.414753    2708 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 05:35:44.414850    2708 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:35:44.414850    2708 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 05:35:44.415378    2708 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 05:35:44.416545    2708 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 05:35:44.416604    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 05:35:44.430694    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:35:44.449083    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 05:35:44.495208    2708 start.go:296] duration metric: took 5.1116893s for postStartSetup
	I0719 05:35:44.498538    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:46.699198    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:46.700105    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:46.700105    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:49.335467    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:49.335467    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:49.336454    2708 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:35:49.338761    2708 start.go:128] duration metric: took 2m9.0910639s to createHost
	I0719 05:35:49.338991    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:51.527273    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:51.527273    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:51.527273    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:54.139883    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:54.140589    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:54.145765    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:35:54.145940    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.167.151 22 <nil> <nil>}
	I0719 05:35:54.145940    2708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 05:35:54.282720    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721367354.280722364
	
	I0719 05:35:54.282720    2708 fix.go:216] guest clock: 1721367354.280722364
	I0719 05:35:54.282720    2708 fix.go:229] Guest: 2024-07-19 05:35:54.280722364 +0000 UTC Remote: 2024-07-19 05:35:49.3389913 +0000 UTC m=+357.033361701 (delta=4.941731064s)
	I0719 05:35:54.282862    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:35:56.478931    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:35:56.479337    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:56.479337    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:35:59.097920    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:35:59.098221    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:35:59.103649    2708 main.go:141] libmachine: Using SSH client type: native
	I0719 05:35:59.104434    2708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.167.151 22 <nil> <nil>}
	I0719 05:35:59.104434    2708 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721367354
	I0719 05:35:59.256292    2708 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 05:35:54 UTC 2024
	
	I0719 05:35:59.256292    2708 fix.go:236] clock set: Fri Jul 19 05:35:54 UTC 2024
	 (err=<nil>)
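The clock-fix sequence above (read the guest clock with `date +%s.%N`, compare it to the host clock, then reset the guest with `sudo date -s @<epoch>` when the delta is too large) can be sketched locally as follows. This is a minimal, non-destructive sketch: both timestamps are taken on the local machine instead of over SSH, and the destructive `date -s` is only echoed.

```shell
#!/bin/sh
# Sketch of minikube's guest-clock fix (fix.go). Assumes a GNU userland
# (date +%N); both clocks are read locally here, standing in for the
# host-side timestamp and the SSH'd guest-side timestamp.

guest_clock=$(date +%s.%N)   # what `date +%s.%N` returns on the guest
host_clock=$(date +%s.%N)    # host-side reference (here: taken right after)

# Compute the absolute delta with awk (floating-point shell arithmetic
# is not portable). In the log above the real delta was ~4.94s.
delta=$(awk -v g="$guest_clock" -v h="$host_clock" \
  'BEGIN { d = h - g; if (d < 0) d = -d; print d }')

# If the delta exceeds a threshold, minikube resets the guest clock with
# `sudo date -s @<epoch-seconds>` -- destructive, so only echoed here.
threshold=1
exceeds=$(awk -v d="$delta" -v t="$threshold" \
  'BEGIN { if (d > t) print 1; else print 0 }')
if [ "$exceeds" -eq 1 ]; then
  echo "would run: sudo date -s @${guest_clock%.*}"
else
  echo "clock within ${threshold}s (delta=${delta}s)"
fi
```

Because the two readings here are microseconds apart, the sketch always takes the "within threshold" branch; in the log the 4.9s delta triggered the reset.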
	I0719 05:35:59.256292    2708 start.go:83] releasing machines lock for "multinode-761300-m02", held for 2m19.0085818s
	I0719 05:35:59.256957    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:36:01.441741    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:36:01.441741    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:01.442121    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:36:04.107614    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:36:04.107614    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:04.111569    2708 out.go:177] * Found network options:
	I0719 05:36:04.116185    2708 out.go:177]   - NO_PROXY=172.28.162.16
	W0719 05:36:04.118837    2708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 05:36:04.121473    2708 out.go:177]   - NO_PROXY=172.28.162.16
	W0719 05:36:04.123716    2708 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 05:36:04.124180    2708 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 05:36:04.127527    2708 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:36:04.127527    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:36:04.136810    2708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:36:04.136810    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:36:06.441893    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:36:06.441893    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:06.441893    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:36:06.442181    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:36:06.442241    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:06.442241    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:36:09.183971    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:36:09.183971    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:09.185180    2708 sshutil.go:53] new ssh client: &{IP:172.28.167.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:36:09.227185    2708 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:36:09.227185    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:09.227729    2708 sshutil.go:53] new ssh client: &{IP:172.28.167.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:36:09.290439    2708 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0719 05:36:09.291546    2708 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.1546755s)
	W0719 05:36:09.291670    2708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:36:09.303771    2708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:36:09.308755    2708 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 05:36:09.309902    2708 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1823141s)
	W0719 05:36:09.310003    2708 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 05:36:09.339735    2708 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 05:36:09.339819    2708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
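The bridge/podman CNI-disabling step above can be reproduced against a throwaway directory instead of `/etc/cni/net.d` (no sudo needed). The `find` expression is the one from the log: match `*bridge*` or `*podman*` config files not already disabled, print each, and rename it with a `.mk_disabled` suffix. Assumes GNU find for `-printf`.

```shell
#!/bin/sh
# Sketch of minikube's bridge CNI disabling, run on a scratch directory.
set -e
cni_dir=$(mktemp -d)
touch "$cni_dir/87-podman-bridge.conflist" "$cni_dir/10-flannel.conflist"

# Same find expression as in the log: disable bridge/podman configs by
# renaming them, leaving everything else (e.g. flannel) untouched.
find "$cni_dir" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
echo
ls "$cni_dir"
```

The `-not -name '*.mk_disabled'` clause makes the step idempotent: a second run finds nothing left to rename.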
	I0719 05:36:09.339819    2708 start.go:495] detecting cgroup driver to use...
	I0719 05:36:09.339819    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:36:09.377896    2708 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 05:36:09.391117    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	W0719 05:36:09.420782    2708 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 05:36:09.420905    2708 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 05:36:09.425221    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:36:09.445500    2708 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:36:09.457184    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:36:09.491250    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:36:09.525129    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:36:09.556465    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:36:09.587882    2708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:36:09.619894    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:36:09.650835    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 05:36:09.690428    2708 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
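The containerd configuration rewrites above are all in-place `sed` edits of `/etc/containerd/config.toml`. The sketch below applies two of those exact expressions (the `sandbox_image` bump and the `SystemdCgroup = false` switch for the cgroupfs driver) to a sample file; the sample TOML content is an illustration, not taken from the log.

```shell
#!/bin/sh
# Sketch of minikube's containerd config edits, on a scratch config.toml.
set -e
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# The sed expressions from the log, unchanged except for the target path.
# The captured leading whitespace (\1) preserves TOML indentation.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

A restart of containerd (as the log does via `systemctl daemon-reload` and `systemctl restart containerd`) is what makes the edits take effect.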
	I0719 05:36:09.721455    2708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:36:09.740784    2708 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 05:36:09.755679    2708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:36:09.788560    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:36:09.997964    2708 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:36:10.031325    2708 start.go:495] detecting cgroup driver to use...
	I0719 05:36:10.044754    2708 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:36:10.070704    2708 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 05:36:10.071449    2708 command_runner.go:130] > [Unit]
	I0719 05:36:10.071449    2708 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 05:36:10.071449    2708 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 05:36:10.071516    2708 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 05:36:10.071516    2708 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 05:36:10.071516    2708 command_runner.go:130] > StartLimitBurst=3
	I0719 05:36:10.071516    2708 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 05:36:10.071575    2708 command_runner.go:130] > [Service]
	I0719 05:36:10.071575    2708 command_runner.go:130] > Type=notify
	I0719 05:36:10.071575    2708 command_runner.go:130] > Restart=on-failure
	I0719 05:36:10.071575    2708 command_runner.go:130] > Environment=NO_PROXY=172.28.162.16
	I0719 05:36:10.071575    2708 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 05:36:10.071712    2708 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 05:36:10.071712    2708 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 05:36:10.071712    2708 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 05:36:10.071778    2708 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 05:36:10.071778    2708 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 05:36:10.071778    2708 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 05:36:10.071842    2708 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 05:36:10.071842    2708 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 05:36:10.071842    2708 command_runner.go:130] > ExecStart=
	I0719 05:36:10.071907    2708 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 05:36:10.071907    2708 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 05:36:10.071970    2708 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 05:36:10.071970    2708 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 05:36:10.071970    2708 command_runner.go:130] > LimitNOFILE=infinity
	I0719 05:36:10.071970    2708 command_runner.go:130] > LimitNPROC=infinity
	I0719 05:36:10.071970    2708 command_runner.go:130] > LimitCORE=infinity
	I0719 05:36:10.071970    2708 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 05:36:10.072053    2708 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 05:36:10.072053    2708 command_runner.go:130] > TasksMax=infinity
	I0719 05:36:10.072053    2708 command_runner.go:130] > TimeoutStartSec=0
	I0719 05:36:10.072053    2708 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 05:36:10.072053    2708 command_runner.go:130] > Delegate=yes
	I0719 05:36:10.072053    2708 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 05:36:10.072122    2708 command_runner.go:130] > KillMode=process
	I0719 05:36:10.072122    2708 command_runner.go:130] > [Install]
	I0719 05:36:10.072122    2708 command_runner.go:130] > WantedBy=multi-user.target
	I0719 05:36:10.084390    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:36:10.120841    2708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:36:10.168719    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:36:10.204453    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:36:10.239012    2708 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 05:36:10.306182    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:36:10.331041    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:36:10.365328    2708 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 05:36:10.377548    2708 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:36:10.383953    2708 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 05:36:10.394798    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:36:10.412493    2708 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 05:36:10.455114    2708 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:36:10.651791    2708 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:36:10.846260    2708 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:36:10.846412    2708 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:36:10.890327    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:36:11.092056    2708 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:36:13.707103    2708 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6150158s)
	I0719 05:36:13.721026    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 05:36:13.756769    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:36:13.790954    2708 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 05:36:14.003293    2708 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 05:36:14.214567    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:36:14.409500    2708 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 05:36:14.449519    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:36:14.482744    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:36:14.679314    2708 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 05:36:14.786485    2708 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 05:36:14.798461    2708 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 05:36:14.807112    2708 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 05:36:14.807176    2708 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 05:36:14.807176    2708 command_runner.go:130] > Device: 0,22	Inode: 889         Links: 1
	I0719 05:36:14.807176    2708 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 05:36:14.807176    2708 command_runner.go:130] > Access: 2024-07-19 05:36:14.717176353 +0000
	I0719 05:36:14.807176    2708 command_runner.go:130] > Modify: 2024-07-19 05:36:14.717176353 +0000
	I0719 05:36:14.807176    2708 command_runner.go:130] > Change: 2024-07-19 05:36:14.721176357 +0000
	I0719 05:36:14.807267    2708 command_runner.go:130] >  Birth: -
	I0719 05:36:14.807267    2708 start.go:563] Will wait 60s for crictl version
	I0719 05:36:14.819339    2708 ssh_runner.go:195] Run: which crictl
	I0719 05:36:14.824934    2708 command_runner.go:130] > /usr/bin/crictl
	I0719 05:36:14.835704    2708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:36:14.892288    2708 command_runner.go:130] > Version:  0.1.0
	I0719 05:36:14.892288    2708 command_runner.go:130] > RuntimeName:  docker
	I0719 05:36:14.892288    2708 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 05:36:14.892288    2708 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 05:36:14.894998    2708 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 05:36:14.904103    2708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:36:14.941245    2708 command_runner.go:130] > 27.0.3
	I0719 05:36:14.950851    2708 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:36:14.982181    2708 command_runner.go:130] > 27.0.3
	I0719 05:36:14.986414    2708 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 05:36:14.995207    2708 out.go:177]   - env NO_PROXY=172.28.162.16
	I0719 05:36:14.997248    2708 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 05:36:15.001276    2708 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 05:36:15.001828    2708 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 05:36:15.001828    2708 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 05:36:15.001828    2708 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 05:36:15.006352    2708 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 05:36:15.006352    2708 ip.go:210] interface addr: 172.28.160.1/20
	I0719 05:36:15.020385    2708 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 05:36:15.027097    2708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
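The `host.minikube.internal` update above uses a grep-then-append pattern that is idempotent: any stale entry is filtered out before the current one is written, so repeated runs leave exactly one entry. A sketch on a scratch copy of `/etc/hosts` (the pre-existing 172.28.99.99 entry is a hypothetical stale value):

```shell
#!/bin/sh
# Sketch of minikube's /etc/hosts update, on a scratch file instead of
# /etc/hosts (so no sudo cp is needed).
set -e
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.28.99.99\thost.minikube.internal\n' > "$hosts"

ip=172.28.160.1
tab=$(printf '\t')
# Drop any existing host.minikube.internal line, then append the fresh one.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"; \
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running the block again would produce the same file, which is why minikube can apply it unconditionally on every start.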
	I0719 05:36:15.052333    2708 mustload.go:65] Loading cluster: multinode-761300
	I0719 05:36:15.052518    2708 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:36:15.053679    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:36:17.235752    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:36:17.236199    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:17.236199    2708 host.go:66] Checking if "multinode-761300" exists ...
	I0719 05:36:17.237010    2708 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300 for IP: 172.28.167.151
	I0719 05:36:17.237010    2708 certs.go:194] generating shared ca certs ...
	I0719 05:36:17.237010    2708 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:36:17.237700    2708 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 05:36:17.237797    2708 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 05:36:17.238348    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 05:36:17.238542    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 05:36:17.239116    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 05:36:17.239288    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 05:36:17.239916    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 05:36:17.240197    2708 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 05:36:17.240419    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 05:36:17.241282    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 05:36:17.241658    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 05:36:17.242031    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 05:36:17.242297    2708 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 05:36:17.242833    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 05:36:17.243195    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:36:17.243246    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 05:36:17.243805    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:36:17.306550    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 05:36:17.376547    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:36:17.423948    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:36:17.471910    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 05:36:17.519204    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:36:17.565042    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 05:36:17.626045    2708 ssh_runner.go:195] Run: openssl version
	I0719 05:36:17.638735    2708 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 05:36:17.649703    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 05:36:17.683222    2708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 05:36:17.691912    2708 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 05:36:17.691965    2708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 05:36:17.703211    2708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 05:36:17.711397    2708 command_runner.go:130] > 3ec20f2e
	I0719 05:36:17.722433    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:36:17.753637    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:36:17.784363    2708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:36:17.792097    2708 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:36:17.792097    2708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:36:17.804827    2708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:36:17.813801    2708 command_runner.go:130] > b5213941
	I0719 05:36:17.825673    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 05:36:17.859908    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 05:36:17.892133    2708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 05:36:17.899602    2708 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 05:36:17.899602    2708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 05:36:17.910442    2708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 05:36:17.918672    2708 command_runner.go:130] > 51391683
	I0719 05:36:17.931311    2708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
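The cert-install sequence above (hash the PEM with `openssl x509 -hash -noout`, then `test -L <hash>.0 || ln -fs <cert> <hash>.0` under /etc/ssl/certs) can be sketched in Python. `install_ca_link` is an illustrative name, not a minikube function; the subject hash is assumed to come from openssl as in the log.

```python
import os
import tempfile

def install_ca_link(cert_path: str, subject_hash: str, certs_dir: str) -> str:
    """Mirror the 'test -L <hash>.0 || ln -fs <cert> <hash>.0' step from the log:
    create the OpenSSL-style <subject_hash>.0 symlink only if it is not already
    a symlink, overwriting a plain file the way 'ln -fs' would."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    if not os.path.islink(link):
        if os.path.exists(link):
            os.remove(link)  # ln -fs replaces an existing regular file
        os.symlink(cert_path, link)
    return link
```

Calling it twice with the same arguments is a no-op the second time, matching the idempotent `test -L ||` guard in the log.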
	I0719 05:36:17.960658    2708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:36:17.967868    2708 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 05:36:17.968199    2708 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0719 05:36:17.968278    2708 kubeadm.go:934] updating node {m02 172.28.167.151 8443 v1.30.3 docker false true} ...
	I0719 05:36:17.968278    2708 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-761300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.167.151
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
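The kubelet unit dumped above is generated per node from the cluster config. A minimal Python sketch of that templating, using the flag set visible in the log (function name is illustrative; the real flag set varies by container runtime):

```python
def kubelet_unit(version: str, node_name: str, node_ip: str) -> str:
    """Render the kubelet systemd drop-in shown in the log for one node.
    The empty 'ExecStart=' line clears the packaged default before
    systemd applies the override."""
    exec_start = (
        f"/var/lib/minikube/binaries/{version}/kubelet"
        f" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"
        f" --config=/var/lib/kubelet/config.yaml"
        f" --hostname-override={node_name}"
        f" --kubeconfig=/etc/kubernetes/kubelet.conf"
        f" --node-ip={node_ip}"
    )
    return "\n".join([
        "[Unit]",
        "Wants=docker.socket",
        "",
        "[Service]",
        "ExecStart=",
        f"ExecStart={exec_start}",
        "",
        "[Install]",
    ])
```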
	I0719 05:36:17.982450    2708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 05:36:18.002863    2708 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	I0719 05:36:18.003798    2708 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0719 05:36:18.014265    2708 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0719 05:36:18.032638    2708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0719 05:36:18.032638    2708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0719 05:36:18.032638    2708 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
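Each "Not caching binary" line above encodes the same URL pattern: the release binary on dl.k8s.io plus a `checksum=file:` query parameter pointing at the sibling `.sha256` file. A small sketch of that URL construction (helper name is illustrative):

```python
def binary_url(version: str, binary: str, arch: str = "amd64") -> str:
    """Build the dl.k8s.io download URL with the self-referencing
    checksum parameter, as shown in the log for kubectl/kubeadm/kubelet."""
    base = f"https://dl.k8s.io/release/{version}/bin/linux/{arch}/{binary}"
    return f"{base}?checksum=file:{base}.sha256"
```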
	I0719 05:36:18.032638    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 05:36:18.032638    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 05:36:18.047816    2708 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0719 05:36:18.049404    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:36:18.049926    2708 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0719 05:36:18.056487    2708 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 05:36:18.056487    2708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0719 05:36:18.056487    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0719 05:36:18.103043    2708 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 05:36:18.103043    2708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0719 05:36:18.103043    2708 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 05:36:18.103043    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0719 05:36:18.118156    2708 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0719 05:36:18.171342    2708 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 05:36:18.171342    2708 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0719 05:36:18.171342    2708 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0719 05:36:19.291473    2708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0719 05:36:19.309923    2708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0719 05:36:19.343542    2708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 05:36:19.390667    2708 ssh_runner.go:195] Run: grep 172.28.162.16	control-plane.minikube.internal$ /etc/hosts
	I0719 05:36:19.397542    2708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.162.16	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
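The /etc/hosts update above is a grep/echo pipeline: drop any stale line for `control-plane.minikube.internal`, then append the fresh mapping. The same transformation in Python (function name is illustrative):

```python
def pin_control_plane(hosts_text: str, ip: str,
                      host: str = "control-plane.minikube.internal") -> str:
    """Python equivalent of the log's
    { grep -v $'\\thost$' /etc/hosts; echo "ip\\thost"; } rewrite:
    remove any line already ending in '\\t<host>', append '<ip>\\t<host>'."""
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + host)]
    kept.append(f"{ip}\t{host}")
    return "\n".join(kept) + "\n"
```

Repeated runs keep exactly one mapping for the host, which is why the rewrite is safe to apply on every start.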
	I0719 05:36:19.429250    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:36:19.640497    2708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:36:19.670327    2708 host.go:66] Checking if "multinode-761300" exists ...
	I0719 05:36:19.671289    2708 start.go:317] joinCluster: &{Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:36:19.671289    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0719 05:36:19.671289    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:36:21.970664    2708 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:36:21.970664    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:21.970753    2708 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:36:24.605650    2708 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:36:24.605650    2708 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:36:24.606424    2708 sshutil.go:53] new ssh client: &{IP:172.28.162.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:36:24.808536    2708 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token h1inb4.celhrgkmp6v3vfdm --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 
	I0719 05:36:24.809522    2708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0": (5.138172s)
	I0719 05:36:24.809522    2708 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 05:36:24.809522    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h1inb4.celhrgkmp6v3vfdm --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-761300-m02"
	I0719 05:36:25.034394    2708 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 05:36:26.363717    2708 command_runner.go:130] > [preflight] Running pre-flight checks
	I0719 05:36:26.363717    2708 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0719 05:36:26.363717    2708 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0719 05:36:26.363717    2708 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 05:36:26.363717    2708 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 05:36:26.363717    2708 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0719 05:36:26.363717    2708 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0719 05:36:26.363717    2708 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.860801ms
	I0719 05:36:26.363717    2708 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0719 05:36:26.363717    2708 command_runner.go:130] > This node has joined the cluster:
	I0719 05:36:26.363717    2708 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0719 05:36:26.363717    2708 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0719 05:36:26.363717    2708 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0719 05:36:26.363943    2708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token h1inb4.celhrgkmp6v3vfdm --discovery-token-ca-cert-hash sha256:01733c582aa24c4b77f8fbc968312dd622883c954bdd673cc06ac42db0517091 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-761300-m02": (1.5544019s)
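The join invocation just completed is the control plane's `kubeadm token create --print-join-command` output plus the worker-specific flags minikube appends. A sketch of that command assembly (function name is illustrative; token and hash below are the ones from this log):

```python
def join_command(endpoint: str, token: str, ca_hash: str, node_name: str,
                 cri_socket: str = "unix:///var/run/cri-dockerd.sock") -> str:
    """Reconstruct the worker 'kubeadm join' line shown in the log from the
    token-create output, adding the preflight/CRI/node-name flags minikube
    appends for the new node."""
    return (
        f"kubeadm join {endpoint} --token {token}"
        f" --discovery-token-ca-cert-hash sha256:{ca_hash}"
        f" --ignore-preflight-errors=all"
        f" --cri-socket {cri_socket}"
        f" --node-name={node_name}"
    )
```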
	I0719 05:36:26.364122    2708 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0719 05:36:26.765725    2708 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0719 05:36:26.778539    2708 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-761300-m02 minikube.k8s.io/updated_at=2024_07_19T05_36_26_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db minikube.k8s.io/name=multinode-761300 minikube.k8s.io/primary=false
	I0719 05:36:26.903077    2708 command_runner.go:130] > node/multinode-761300-m02 labeled
	I0719 05:36:26.903237    2708 start.go:319] duration metric: took 7.231861s to joinCluster
	I0719 05:36:26.903440    2708 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0719 05:36:26.904823    2708 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:36:26.907129    2708 out.go:177] * Verifying Kubernetes components...
	I0719 05:36:26.922319    2708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:36:27.125999    2708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:36:27.153205    2708 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:36:27.154090    2708 kapi.go:59] client config for multinode-761300: &rest.Config{Host:"https://172.28.162.16:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 05:36:27.154958    2708 node_ready.go:35] waiting up to 6m0s for node "multinode-761300-m02" to be "Ready" ...
	I0719 05:36:27.155340    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:27.155340    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:27.155340    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:27.155340    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:27.172797    2708 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0719 05:36:27.172859    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:27.172927    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:27.172927    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:27.173024    2708 round_trippers.go:580]     Content-Length: 3921
	I0719 05:36:27.173024    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:27 GMT
	I0719 05:36:27.173024    2708 round_trippers.go:580]     Audit-Id: 4afb97bc-50f6-4586-a686-7fd505803627
	I0719 05:36:27.173024    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:27.173024    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:27.173024    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"580","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
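The polling that follows repeats this GET on the Node object until it reports Ready. The readiness predicate applied to each response body can be sketched as (function name is illustrative):

```python
def node_is_ready(node: dict) -> bool:
    """Check a decoded /api/v1/nodes/<name> response the way the wait loop
    in the log does: a status condition of type 'Ready' whose status
    string is 'True'."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False
```

A node that has just joined (like multinode-761300-m02 here) typically reports `Ready: False` for a few seconds until the kubelet and CNI settle, which is why the loop re-polls roughly every 500ms.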
	I0719 05:36:27.660115    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:27.660115    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:27.660115    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:27.660115    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:27.664616    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:27.664899    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:27.664899    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:27.664899    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:27.664899    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:27.664899    2708 round_trippers.go:580]     Content-Length: 3921
	I0719 05:36:27.664899    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:27 GMT
	I0719 05:36:27.664899    2708 round_trippers.go:580]     Audit-Id: aa96e006-3ab5-4830-881c-2a9dd6f42c4e
	I0719 05:36:27.664899    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:27.665178    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"580","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0719 05:36:28.159926    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:28.160197    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:28.160197    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:28.160197    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:28.164563    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:28.164563    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:28.164563    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:28.164563    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:28.164563    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:28.164563    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:28.164957    2708 round_trippers.go:580]     Content-Length: 3921
	I0719 05:36:28.164957    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:28 GMT
	I0719 05:36:28.164957    2708 round_trippers.go:580]     Audit-Id: f181f6c2-69e6-4953-8c8d-fe225c24585f
	I0719 05:36:28.165096    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"580","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0719 05:36:28.661063    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:28.661063    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:28.661063    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:28.661063    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:28.664366    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:28.664366    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:28.664998    2708 round_trippers.go:580]     Audit-Id: 5af1e09f-86fe-4365-8016-09aff1c95656
	I0719 05:36:28.664998    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:28.664998    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:28.664998    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:28.664998    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:28.664998    2708 round_trippers.go:580]     Content-Length: 3921
	I0719 05:36:28.664998    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:28 GMT
	I0719 05:36:28.665278    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"580","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0719 05:36:29.158132    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:29.158132    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:29.158340    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:29.158340    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:29.162115    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:29.162974    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:29.162974    2708 round_trippers.go:580]     Audit-Id: 357940e4-2337-40cd-89b3-981bec4a9235
	I0719 05:36:29.162974    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:29.162974    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:29.162974    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:29.162974    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:29.162974    2708 round_trippers.go:580]     Content-Length: 3921
	I0719 05:36:29.162974    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:29 GMT
	I0719 05:36:29.163151    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"580","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0719 05:36:29.163658    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:29.656741    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:29.656869    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:29.656869    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:29.656869    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:29.660035    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:29.660799    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:29.660799    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:29.660799    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:29.660799    2708 round_trippers.go:580]     Content-Length: 3921
	I0719 05:36:29.660799    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:29 GMT
	I0719 05:36:29.660799    2708 round_trippers.go:580]     Audit-Id: 00d9b15a-249d-4d2d-bb3c-07f7bd5b6b09
	I0719 05:36:29.660799    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:29.660799    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:29.661079    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"580","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0719 05:36:30.155801    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:30.155885    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:30.155885    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:30.155885    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:30.159156    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:30.159156    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:30.159156    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:30.159156    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:30.159156    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:30.159156    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:30.160165    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:30.160165    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:30 GMT
	I0719 05:36:30.160192    2708 round_trippers.go:580]     Audit-Id: 02d5cf2a-6c00-4b95-8a4c-95af2d4ea2f1
	I0719 05:36:30.160304    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:30.669049    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:30.669049    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:30.669132    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:30.669132    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:30.673103    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:30.673558    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:30.673558    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:30 GMT
	I0719 05:36:30.673558    2708 round_trippers.go:580]     Audit-Id: 8a7ef3b2-fe56-4c73-b1cc-f4d5f4a482ae
	I0719 05:36:30.673558    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:30.673558    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:30.673558    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:30.673558    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:30.673558    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:30.673743    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:31.166894    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:31.166894    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:31.166894    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:31.166999    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:31.198615    2708 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0719 05:36:31.198748    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:31.198748    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:31.198748    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:31 GMT
	I0719 05:36:31.198748    2708 round_trippers.go:580]     Audit-Id: 51276300-c8c4-49cb-a13f-381d95a1d5aa
	I0719 05:36:31.198748    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:31.198748    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:31.198748    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:31.198748    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:31.199006    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:31.199091    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:31.656136    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:31.656397    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:31.656462    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:31.656462    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:31.660252    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:31.660905    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:31.660978    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:31.660978    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:31.660978    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:31.660978    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:31.660978    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:31.660978    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:31 GMT
	I0719 05:36:31.660978    2708 round_trippers.go:580]     Audit-Id: d2f6cb49-e361-46b9-b367-c295ab4073b2
	I0719 05:36:31.660978    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:32.159727    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:32.159804    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:32.159804    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:32.159804    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:32.162221    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:32.163256    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:32.163256    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:32.163256    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:32 GMT
	I0719 05:36:32.163256    2708 round_trippers.go:580]     Audit-Id: 030382bd-2916-4031-bb45-20d95295f431
	I0719 05:36:32.163256    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:32.163256    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:32.163256    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:32.163256    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:32.163256    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:32.660724    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:32.660724    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:32.660724    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:32.660724    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:32.664783    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:32.664783    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:32.664783    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:32.664783    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:32.664783    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:32.664783    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:32.664783    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:32.665208    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:32 GMT
	I0719 05:36:32.665208    2708 round_trippers.go:580]     Audit-Id: e1eace62-b463-40c4-ae08-5439795c4306
	I0719 05:36:32.665496    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:33.163115    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:33.163115    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:33.163115    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:33.163115    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:33.167643    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:33.167712    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:33.167712    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:33.167712    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:33.167712    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:33.167712    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:33.167712    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:33 GMT
	I0719 05:36:33.167712    2708 round_trippers.go:580]     Audit-Id: b8565571-0e01-41a0-a397-0669ee017393
	I0719 05:36:33.167786    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:33.167786    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:33.669078    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:33.669078    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:33.669326    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:33.669326    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:33.672936    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:33.672936    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:33.672936    2708 round_trippers.go:580]     Audit-Id: 6edeb477-cdf8-41f5-852a-ec32f8e1ebb8
	I0719 05:36:33.672936    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:33.672936    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:33.672936    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:33.672936    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:33.672936    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:33.672936    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:33 GMT
	I0719 05:36:33.673852    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:33.674260    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:34.162827    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:34.162827    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:34.162827    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:34.162827    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:34.169914    2708 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 05:36:34.169979    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:34.169979    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:34.169979    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:34.169979    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:34.169979    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:34.169979    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:34 GMT
	I0719 05:36:34.169979    2708 round_trippers.go:580]     Audit-Id: 55ee9abe-0d14-41fa-9527-6a5e804ba83c
	I0719 05:36:34.170058    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:34.170135    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:34.667789    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:34.667789    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:34.667789    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:34.667789    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:34.670885    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:34.671832    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:34.671832    2708 round_trippers.go:580]     Audit-Id: 30eb2969-3cc8-47a0-8522-71d4fd915bf6
	I0719 05:36:34.671832    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:34.671894    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:34.671894    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:34.671894    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:34.671894    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:34.671894    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:34 GMT
	I0719 05:36:34.671894    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:35.157890    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:35.157890    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:35.157890    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:35.157890    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:35.164113    2708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:36:35.164827    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:35.164827    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:35.164827    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:35 GMT
	I0719 05:36:35.164827    2708 round_trippers.go:580]     Audit-Id: c3de77b4-0151-4306-9596-725455fc4122
	I0719 05:36:35.164827    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:35.164827    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:35.164827    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:35.164827    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:35.164827    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:35.664739    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:35.664842    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:35.664842    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:35.664842    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:35.668750    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:35.668750    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:35.668750    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:35.668750    2708 round_trippers.go:580]     Content-Length: 4030
	I0719 05:36:35.668750    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:35 GMT
	I0719 05:36:35.668750    2708 round_trippers.go:580]     Audit-Id: 3e9c6593-b01e-433e-8ffc-4a98fe1b18f7
	I0719 05:36:35.668750    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:35.668750    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:35.668750    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:35.668750    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"586","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3006 chars]
	I0719 05:36:36.155879    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:36.156047    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:36.156047    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:36.156105    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:36.159710    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:36.159710    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:36.159710    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:36.159710    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:36.159710    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:36 GMT
	I0719 05:36:36.159710    2708 round_trippers.go:580]     Audit-Id: bdb38e18-6ff3-45a6-aec5-d8cecf8fe649
	I0719 05:36:36.159710    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:36.159710    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:36.159710    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:36.160700    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:36.661756    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:36.661981    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:36.661981    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:36.662070    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:37.041294    2708 round_trippers.go:574] Response Status: 200 OK in 379 milliseconds
	I0719 05:36:37.041639    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:37.041690    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:37.041690    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:37.041690    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:37.041690    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:37.041690    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:37 GMT
	I0719 05:36:37.041690    2708 round_trippers.go:580]     Audit-Id: c42fa560-d7d9-42be-904b-95bdc14b92ae
	I0719 05:36:37.041924    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:37.160085    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:37.160085    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:37.160085    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:37.160085    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:37.194904    2708 round_trippers.go:574] Response Status: 200 OK in 34 milliseconds
	I0719 05:36:37.194904    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:37.194904    2708 round_trippers.go:580]     Audit-Id: 6b79b607-8c0d-4b43-9e8e-454d8f4c491a
	I0719 05:36:37.194904    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:37.195730    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:37.195730    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:37.195730    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:37.195730    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:37 GMT
	I0719 05:36:37.196113    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:37.662050    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:37.662050    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:37.662129    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:37.662129    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:37.666669    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:37.666669    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:37.666669    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:37 GMT
	I0719 05:36:37.666669    2708 round_trippers.go:580]     Audit-Id: 1d7ae528-76cb-49ce-9abb-cceffe77eb52
	I0719 05:36:37.666669    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:37.666669    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:37.666669    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:37.666669    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:37.666669    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:38.168559    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:38.168559    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:38.168559    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:38.168559    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:38.182431    2708 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0719 05:36:38.182863    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:38.182863    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:38.182863    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:38.182863    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:38.182863    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:38.182936    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:38 GMT
	I0719 05:36:38.182936    2708 round_trippers.go:580]     Audit-Id: 74870354-c0a7-4d9d-afc2-cd57bf1852b2
	I0719 05:36:38.183259    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:38.183702    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:38.665467    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:38.665556    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:38.665556    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:38.665556    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:38.670212    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:38.670212    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:38.670212    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:38.670212    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:38.670212    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:38 GMT
	I0719 05:36:38.670212    2708 round_trippers.go:580]     Audit-Id: 5e5dad5c-0215-4882-a3fd-8b0da997870e
	I0719 05:36:38.670212    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:38.670212    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:38.670212    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:39.170752    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:39.170831    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:39.170831    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:39.170831    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:39.178186    2708 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0719 05:36:39.178706    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:39.178706    2708 round_trippers.go:580]     Audit-Id: 23406956-725c-4138-892e-ac0d6de47287
	I0719 05:36:39.178706    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:39.178706    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:39.178706    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:39.178706    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:39.178706    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:39 GMT
	I0719 05:36:39.178706    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:39.661905    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:39.661970    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:39.661970    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:39.662038    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:39.665240    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:39.665240    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:39.665626    2708 round_trippers.go:580]     Audit-Id: ea82b10e-0718-4768-8e47-268c62cad30a
	I0719 05:36:39.665626    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:39.665626    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:39.665626    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:39.665626    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:39.665626    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:39 GMT
	I0719 05:36:39.665626    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:40.162276    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:40.162276    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:40.162276    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:40.162276    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:40.165845    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:40.166328    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:40.166328    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:40.166328    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:40.166328    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:40 GMT
	I0719 05:36:40.166398    2708 round_trippers.go:580]     Audit-Id: 02a977ab-afec-434c-b467-8946b463f73f
	I0719 05:36:40.166398    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:40.166398    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:40.166660    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:40.670165    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:40.670165    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:40.670165    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:40.670165    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:40.674159    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:40.674159    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:40.674159    2708 round_trippers.go:580]     Audit-Id: ecbceda2-1d27-4342-aa5f-20d2051c5f30
	I0719 05:36:40.674159    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:40.674159    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:40.674159    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:40.674159    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:40.674287    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:40 GMT
	I0719 05:36:40.674516    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:40.674577    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:41.162021    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:41.162021    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:41.162021    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:41.162021    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:41.166125    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:41.166242    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:41.166319    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:41.166319    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:41.166319    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:41.166319    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:41 GMT
	I0719 05:36:41.166319    2708 round_trippers.go:580]     Audit-Id: b48a768b-2861-43f7-afff-6881015fcfe2
	I0719 05:36:41.166319    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:41.166319    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:41.662008    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:41.662196    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:41.662196    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:41.662271    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:41.666395    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:41.666395    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:41.666489    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:41.666489    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:41.666489    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:41.666489    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:41 GMT
	I0719 05:36:41.666489    2708 round_trippers.go:580]     Audit-Id: 8b8d68ab-5125-497c-939a-bae08c3c486b
	I0719 05:36:41.666489    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:41.666778    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:42.162466    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:42.162466    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:42.162806    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:42.162806    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:42.176187    2708 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0719 05:36:42.176834    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:42.176834    2708 round_trippers.go:580]     Audit-Id: 7822bb0a-b192-4019-9011-48cad0166ac5
	I0719 05:36:42.176834    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:42.176834    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:42.176834    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:42.176834    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:42.176834    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:42 GMT
	I0719 05:36:42.177312    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:42.667532    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:42.667811    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:42.667811    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:42.667811    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:42.671258    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:42.671258    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:42.671258    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:42 GMT
	I0719 05:36:42.671258    2708 round_trippers.go:580]     Audit-Id: 11d2eaed-dce0-4f7d-84f3-7255c2409b1d
	I0719 05:36:42.671258    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:42.671258    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:42.671258    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:42.671258    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:42.672949    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:43.155888    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:43.155888    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:43.155888    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:43.155888    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:43.159890    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:43.159890    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:43.159957    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:43.159957    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:43 GMT
	I0719 05:36:43.159957    2708 round_trippers.go:580]     Audit-Id: c1f60223-cd17-4e40-80c3-54f5b5995fbc
	I0719 05:36:43.159957    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:43.159957    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:43.159957    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:43.160204    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:43.160596    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:43.662560    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:43.662560    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:43.662560    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:43.662560    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:43.666209    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:43.666209    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:43.666209    2708 round_trippers.go:580]     Audit-Id: 9c93374d-7d4a-46da-9b09-1437989fd5b4
	I0719 05:36:43.666209    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:43.666209    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:43.666209    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:43.666209    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:43.666209    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:43 GMT
	I0719 05:36:43.666847    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:44.163825    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:44.163893    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:44.163893    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:44.163893    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:44.167541    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:44.168241    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:44.168241    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:44.168376    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:44.168376    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:44 GMT
	I0719 05:36:44.168376    2708 round_trippers.go:580]     Audit-Id: 01a7a37a-71b6-43f6-997b-8fd16cb0401b
	I0719 05:36:44.168376    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:44.168376    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:44.168650    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:44.668731    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:44.668956    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:44.668956    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:44.668956    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:44.672110    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:44.672110    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:44.672110    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:44.672110    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:44 GMT
	I0719 05:36:44.672110    2708 round_trippers.go:580]     Audit-Id: 43fd66bc-d212-46de-a771-dc4a0c37dcbd
	I0719 05:36:44.672110    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:44.672110    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:44.672110    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:44.672865    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:45.155802    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:45.156053    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:45.156053    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:45.156053    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:45.159478    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:45.160441    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:45.160441    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:45.160441    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:45.160441    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:45.160441    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:45.160441    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:45 GMT
	I0719 05:36:45.160441    2708 round_trippers.go:580]     Audit-Id: f7275a59-96ff-4fe5-aab7-6cb2fe7886b7
	I0719 05:36:45.160759    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:45.161107    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:45.657107    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:45.657529    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:45.657529    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:45.657529    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:45.660585    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:45.660636    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:45.660636    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:45.660636    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:45.660636    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:45.660636    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:45.660636    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:45 GMT
	I0719 05:36:45.660636    2708 round_trippers.go:580]     Audit-Id: 4c036ad4-6767-49ee-83d7-15df3c9cba53
	I0719 05:36:45.661067    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:46.159009    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:46.159009    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:46.159259    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:46.159259    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:46.162603    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:46.162603    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:46.162603    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:46 GMT
	I0719 05:36:46.163020    2708 round_trippers.go:580]     Audit-Id: 30dbab7e-2748-4e59-83f3-f5c287469e8c
	I0719 05:36:46.163020    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:46.163020    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:46.163020    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:46.163020    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:46.163654    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:46.662410    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:46.662569    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:46.662569    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:46.662569    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:46.668036    2708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:36:46.668036    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:46.668036    2708 round_trippers.go:580]     Audit-Id: bb68224e-6f31-4830-bb89-5923690e0111
	I0719 05:36:46.668036    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:46.668118    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:46.668118    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:46.668118    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:46.668118    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:46 GMT
	I0719 05:36:46.668327    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:47.161875    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:47.161875    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:47.161875    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:47.161875    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:47.166311    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:47.166311    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:47.166311    2708 round_trippers.go:580]     Audit-Id: a2480668-d710-4d90-b71f-b540dc41f823
	I0719 05:36:47.166375    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:47.166375    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:47.166375    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:47.166375    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:47.166375    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:47 GMT
	I0719 05:36:47.166746    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:47.167268    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:47.663989    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:47.664069    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:47.664069    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:47.664069    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:47.666482    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:47.666482    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:47.666482    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:47.666482    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:47.667533    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:47 GMT
	I0719 05:36:47.667533    2708 round_trippers.go:580]     Audit-Id: 3e097bb0-b906-47b0-8ddc-4a1fbb95d03f
	I0719 05:36:47.667554    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:47.667554    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:47.668239    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:48.164184    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:48.164389    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:48.164481    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:48.164481    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:48.167004    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:48.167866    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:48.167866    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:48 GMT
	I0719 05:36:48.167866    2708 round_trippers.go:580]     Audit-Id: a7ea20a5-3a51-4cc1-8f4d-224863168c69
	I0719 05:36:48.167866    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:48.167866    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:48.167866    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:48.167866    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:48.168193    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:48.663666    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:48.663937    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:48.663937    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:48.664036    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:48.667868    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:48.667868    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:48.667868    2708 round_trippers.go:580]     Audit-Id: 966490e1-830d-4686-ad9c-b68058be82a8
	I0719 05:36:48.667868    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:48.667868    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:48.667868    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:48.667868    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:48.667868    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:48 GMT
	I0719 05:36:48.669448    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:49.163174    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:49.163174    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:49.163174    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:49.163174    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:49.166848    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:49.166848    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:49.167585    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:49.167585    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:49 GMT
	I0719 05:36:49.167585    2708 round_trippers.go:580]     Audit-Id: 3da010bc-4d26-4680-bbb0-0631389895c5
	I0719 05:36:49.167585    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:49.167585    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:49.167585    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:49.167854    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:49.168440    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:49.668060    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:49.668060    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:49.668060    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:49.668060    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:49.672694    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:49.672694    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:49.672694    2708 round_trippers.go:580]     Audit-Id: b79e4992-b643-4d44-8f44-5efa2fdb2d59
	I0719 05:36:49.672694    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:49.672694    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:49.672694    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:49.672694    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:49.672694    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:49 GMT
	I0719 05:36:49.673187    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:50.169856    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:50.169856    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:50.169856    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:50.169856    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:50.173692    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:50.174615    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:50.174615    2708 round_trippers.go:580]     Audit-Id: 2096d99f-af16-4dc1-8ec0-6272807016c9
	I0719 05:36:50.174615    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:50.174615    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:50.174615    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:50.174615    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:50.174615    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:50 GMT
	I0719 05:36:50.174876    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:50.669957    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:50.670061    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:50.670061    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:50.670143    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:50.673555    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:50.673555    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:50.673555    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:50.673555    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:50.673770    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:50.673770    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:50.673770    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:50 GMT
	I0719 05:36:50.673770    2708 round_trippers.go:580]     Audit-Id: 44670a51-9cf7-452f-8092-2018e3e9f140
	I0719 05:36:50.673990    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:51.169122    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:51.169122    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:51.169362    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:51.169362    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:51.172981    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:51.172981    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:51.172981    2708 round_trippers.go:580]     Audit-Id: 56f115f4-5d76-4388-8f7b-6601a4d9a575
	I0719 05:36:51.172981    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:51.172981    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:51.172981    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:51.172981    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:51.172981    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:51 GMT
	I0719 05:36:51.174151    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:51.174573    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:51.667862    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:51.667862    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:51.667862    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:51.667862    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:51.671012    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:51.671012    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:51.671012    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:51.671012    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:51.671012    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:51.671012    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:51.671012    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:51 GMT
	I0719 05:36:51.671012    2708 round_trippers.go:580]     Audit-Id: 3c26b7d7-6c25-41d7-bf85-f28f1b2ca278
	I0719 05:36:51.672181    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:52.170798    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:52.170798    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:52.170798    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:52.170798    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:52.177300    2708 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:36:52.177300    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:52.177300    2708 round_trippers.go:580]     Audit-Id: 8ffe9e38-5474-43ca-9adb-e07768dd2c1b
	I0719 05:36:52.177300    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:52.177300    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:52.177300    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:52.177300    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:52.177300    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:52 GMT
	I0719 05:36:52.178040    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:52.670557    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:52.670557    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:52.670851    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:52.670851    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:52.676748    2708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:36:52.676748    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:52.676748    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:52.676748    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:52 GMT
	I0719 05:36:52.676748    2708 round_trippers.go:580]     Audit-Id: 1ba38c09-d768-40bf-9e7d-a45a7aa97be4
	I0719 05:36:52.676748    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:52.676748    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:52.676748    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:52.676748    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:53.170241    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:53.170511    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:53.170511    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:53.170595    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:53.174407    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:53.174407    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:53.174407    2708 round_trippers.go:580]     Audit-Id: 0a619e4b-4afa-4c36-b9dc-be1c104d84f9
	I0719 05:36:53.174657    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:53.174657    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:53.174657    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:53.174657    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:53.174657    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:53 GMT
	I0719 05:36:53.174904    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:53.175441    2708 node_ready.go:53] node "multinode-761300-m02" has status "Ready":"False"
	I0719 05:36:53.671105    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:53.671105    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:53.671184    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:53.671184    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:53.674446    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:53.674706    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:53.674706    2708 round_trippers.go:580]     Audit-Id: de4cbdfc-b079-4500-8338-2d6ad03e4f9c
	I0719 05:36:53.674706    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:53.674706    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:53.674706    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:53.674706    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:53.674706    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:53 GMT
	I0719 05:36:53.675114    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:54.168168    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:54.168235    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:54.168235    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:54.168235    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:54.180253    2708 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 05:36:54.180655    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:54.180655    2708 round_trippers.go:580]     Audit-Id: 70d38965-e806-4e0d-9917-39df2e587536
	I0719 05:36:54.180655    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:54.180655    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:54.180732    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:54.180732    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:54.180732    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:54 GMT
	I0719 05:36:54.180964    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:54.666179    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:54.666534    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:54.666534    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:54.666534    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:54.670023    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:54.670852    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:54.670852    2708 round_trippers.go:580]     Audit-Id: 7331401b-5d0f-40c9-b0ec-8c14cfe35c16
	I0719 05:36:54.670852    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:54.670852    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:54.670852    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:54.670852    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:54.670852    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:54 GMT
	I0719 05:36:54.670852    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"594","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3398 chars]
	I0719 05:36:55.163880    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:55.164145    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.164145    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.164145    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.168351    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:55.168498    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.168498    2708 round_trippers.go:580]     Audit-Id: 82cf42c0-f095-425a-9b37-a377c2e1d177
	I0719 05:36:55.168527    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.168527    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.168558    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.168558    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.168558    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.168585    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"626","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3144 chars]
	I0719 05:36:55.169120    2708 node_ready.go:49] node "multinode-761300-m02" has status "Ready":"True"
	I0719 05:36:55.169120    2708 node_ready.go:38] duration metric: took 28.0138268s for node "multinode-761300-m02" to be "Ready" ...
	I0719 05:36:55.169120    2708 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:36:55.169419    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods
	I0719 05:36:55.169495    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.169495    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.169548    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.175447    2708 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:36:55.175447    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.175447    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.175447    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.175447    2708 round_trippers.go:580]     Audit-Id: 7356e524-bb48-4f05-b218-1865a8d54ca9
	I0719 05:36:55.175447    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.175447    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.175447    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.177959    2708 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"626"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"413","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70438 chars]
	I0719 05:36:55.181089    2708 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.181216    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:36:55.181216    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.181374    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.181374    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.183630    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.183630    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.183630    2708 round_trippers.go:580]     Audit-Id: 3cc09513-0d39-421c-a528-342e1ed73f0c
	I0719 05:36:55.183630    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.183630    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.183630    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.183630    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.183630    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.185127    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"413","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0719 05:36:55.185839    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:36:55.185891    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.185891    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.185891    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.188622    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.188622    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.188622    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.188622    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.188622    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.188622    2708 round_trippers.go:580]     Audit-Id: 6b057702-87b1-4aed-b9ec-56d91d9470a6
	I0719 05:36:55.188622    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.189566    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.190100    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:36:55.190575    2708 pod_ready.go:92] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"True"
	I0719 05:36:55.190638    2708 pod_ready.go:81] duration metric: took 9.549ms for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.190666    2708 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.190742    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-761300
	I0719 05:36:55.190767    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.190767    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.190767    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.193562    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.194236    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.194236    2708 round_trippers.go:580]     Audit-Id: 0fadde8a-dbf5-483b-9337-64d5231e9059
	I0719 05:36:55.194236    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.194236    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.194236    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.194236    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.194236    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.194561    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-761300","namespace":"kube-system","uid":"a2361ae1-fa19-4fed-9917-abc94c9107aa","resourceVersion":"285","creationTimestamp":"2024-07-19T05:32:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.162.16:2379","kubernetes.io/config.hash":"1436f6b96e809d6c17e4b090c15cf220","kubernetes.io/config.mirror":"1436f6b96e809d6c17e4b090c15cf220","kubernetes.io/config.seen":"2024-07-19T05:32:54.007753868Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:32:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0719 05:36:55.195079    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:36:55.195134    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.195134    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.195134    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.197556    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.197556    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.197556    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.197556    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.197556    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.197556    2708 round_trippers.go:580]     Audit-Id: 41d74579-5240-452f-8c6f-23c20f8853e2
	I0719 05:36:55.197556    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.197556    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.197556    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:36:55.198560    2708 pod_ready.go:92] pod "etcd-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:36:55.198560    2708 pod_ready.go:81] duration metric: took 7.8939ms for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.198560    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.198560    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-761300
	I0719 05:36:55.198560    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.198560    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.198560    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.200713    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.200713    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.200713    2708 round_trippers.go:580]     Audit-Id: bbd26307-ba7f-4134-be3e-3b863cfe92d9
	I0719 05:36:55.200713    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.200713    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.200713    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.200713    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.200713    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.201882    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-761300","namespace":"kube-system","uid":"36919164-4b0f-48b4-b71b-024def806c8d","resourceVersion":"281","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.162.16:8443","kubernetes.io/config.hash":"b3cb2b1621f72c668585d21689da850a","kubernetes.io/config.mirror":"b3cb2b1621f72c668585d21689da850a","kubernetes.io/config.seen":"2024-07-19T05:33:02.001206567Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0719 05:36:55.202081    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:36:55.202081    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.202081    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.202081    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.204777    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.204777    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.204777    2708 round_trippers.go:580]     Audit-Id: 667e44c4-5291-4ab8-9ae6-6e8d8c7b0c22
	I0719 05:36:55.204777    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.204777    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.204777    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.204777    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.204777    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.205027    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:36:55.205467    2708 pod_ready.go:92] pod "kube-apiserver-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:36:55.205467    2708 pod_ready.go:81] duration metric: took 6.907ms for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.205467    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.205467    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-761300
	I0719 05:36:55.205467    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.205467    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.205467    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.208175    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.208175    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.208810    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.208810    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.208810    2708 round_trippers.go:580]     Audit-Id: 52eed514-9eb5-4a8f-8d1b-1b29a4ad3535
	I0719 05:36:55.208810    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.208810    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.208810    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.209272    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-761300","namespace":"kube-system","uid":"2124834c-1961-49fb-8699-fba2fc5dd0ac","resourceVersion":"280","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.mirror":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.seen":"2024-07-19T05:33:02.001207967Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0719 05:36:55.209850    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:36:55.209997    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.209997    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.209997    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.212961    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.212961    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.212961    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.212961    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.212961    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.212961    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.213066    2708 round_trippers.go:580]     Audit-Id: 029fb0ac-ebc2-48eb-a8b9-73574c0bfa80
	I0719 05:36:55.213084    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.213194    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:36:55.213592    2708 pod_ready.go:92] pod "kube-controller-manager-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:36:55.213668    2708 pod_ready.go:81] duration metric: took 8.2008ms for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.213668    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.366956    2708 request.go:629] Waited for 153.0619ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:36:55.367104    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:36:55.367343    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.367433    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.367608    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.372361    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:55.372439    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.372439    2708 round_trippers.go:580]     Audit-Id: 4202d938-bc5a-4bbd-acfb-7702f83d975a
	I0719 05:36:55.372439    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.372439    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.372439    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.372439    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.372439    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.372941    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4z7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"17ff8aac-2d57-44fb-a3ec-f0d6ea181881","resourceVersion":"368","creationTimestamp":"2024-07-19T05:33:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0719 05:36:55.568343    2708 request.go:629] Waited for 194.3857ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:36:55.568462    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:36:55.568518    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.568518    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.568568    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.572683    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:55.572786    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.572786    2708 round_trippers.go:580]     Audit-Id: f5d5f3e3-0c25-4ac2-bab7-08be0032159f
	I0719 05:36:55.572786    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.572786    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.572786    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.572786    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.572786    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.573142    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:36:55.573373    2708 pod_ready.go:92] pod "kube-proxy-c4z7f" in "kube-system" namespace has status "Ready":"True"
	I0719 05:36:55.573373    2708 pod_ready.go:81] duration metric: took 359.7003ms for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.573373    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.771992    2708 request.go:629] Waited for 198.6171ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:36:55.772354    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:36:55.772354    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.772354    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.772354    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.775754    2708 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:36:55.775779    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.775845    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.775845    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.775845    2708 round_trippers.go:580]     Audit-Id: 3779fa4e-a464-4ec4-aa26-837d758510cb
	I0719 05:36:55.775845    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.775845    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.775910    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.776228    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjv8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d0f7d34-4031-46d3-a580-a2d080d9d335","resourceVersion":"602","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0719 05:36:55.974077    2708 request.go:629] Waited for 197.0456ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:55.974164    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:36:55.974164    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:55.974164    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:55.974164    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:55.978560    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:55.978560    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:55.978560    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:55 GMT
	I0719 05:36:55.979051    2708 round_trippers.go:580]     Audit-Id: ad366a05-04f3-4bbb-9729-884d3811cd49
	I0719 05:36:55.979051    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:55.979051    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:55.979051    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:55.979051    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:55.979201    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"626","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-atta [truncated 3144 chars]
	I0719 05:36:55.979770    2708 pod_ready.go:92] pod "kube-proxy-mjv8l" in "kube-system" namespace has status "Ready":"True"
	I0719 05:36:55.979770    2708 pod_ready.go:81] duration metric: took 406.3922ms for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:55.979770    2708 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:56.164348    2708 request.go:629] Waited for 184.1658ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:36:56.164348    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:36:56.164596    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:56.164596    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:56.164596    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:56.176717    2708 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 05:36:56.176717    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:56.176717    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:56.176717    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:56.176717    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:56 GMT
	I0719 05:36:56.176717    2708 round_trippers.go:580]     Audit-Id: fa95bd11-a1fb-4f08-a36d-45ab46a6489a
	I0719 05:36:56.176717    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:56.176717    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:56.176717    2708 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-761300","namespace":"kube-system","uid":"49a739d1-1ae3-4a41-aebc-0eb7b2b4f242","resourceVersion":"287","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.mirror":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.seen":"2024-07-19T05:33:02.001209067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0719 05:36:56.366924    2708 request.go:629] Waited for 188.9777ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:36:56.367065    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes/multinode-761300
	I0719 05:36:56.367065    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:56.367065    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:56.367065    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:56.370664    2708 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:36:56.370664    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:56.370664    2708 round_trippers.go:580]     Audit-Id: 04105459-7bbd-4a3e-9475-291273767b91
	I0719 05:36:56.370664    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:56.370664    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:56.370664    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:56.370664    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:56.371313    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:56 GMT
	I0719 05:36:56.371497    2708 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0719 05:36:56.371497    2708 pod_ready.go:92] pod "kube-scheduler-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:36:56.371497    2708 pod_ready.go:81] duration metric: took 391.7224ms for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:36:56.371497    2708 pod_ready.go:38] duration metric: took 1.2023625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:36:56.372026    2708 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 05:36:56.383126    2708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:36:56.404654    2708 system_svc.go:56] duration metric: took 32.6285ms WaitForService to wait for kubelet
	I0719 05:36:56.405109    2708 kubeadm.go:582] duration metric: took 29.5012462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:36:56.405109    2708 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:36:56.570180    2708 request.go:629] Waited for 164.6616ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.16:8443/api/v1/nodes
	I0719 05:36:56.570180    2708 round_trippers.go:463] GET https://172.28.162.16:8443/api/v1/nodes
	I0719 05:36:56.570307    2708 round_trippers.go:469] Request Headers:
	I0719 05:36:56.570307    2708 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:36:56.570350    2708 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:36:56.574866    2708 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:36:56.575459    2708 round_trippers.go:577] Response Headers:
	I0719 05:36:56.575459    2708 round_trippers.go:580]     Audit-Id: 657ddf45-8ce2-495f-8db2-f672ed8d79fc
	I0719 05:36:56.575459    2708 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:36:56.575459    2708 round_trippers.go:580]     Content-Type: application/json
	I0719 05:36:56.575459    2708 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:36:56.575459    2708 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:36:56.575459    2708 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:36:56 GMT
	I0719 05:36:56.575750    2708 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"628"},"items":[{"metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"394","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9661 chars]
	I0719 05:36:56.576740    2708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:36:56.576801    2708 node_conditions.go:123] node cpu capacity is 2
	I0719 05:36:56.576801    2708 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:36:56.576923    2708 node_conditions.go:123] node cpu capacity is 2
	I0719 05:36:56.576923    2708 node_conditions.go:105] duration metric: took 171.7385ms to run NodePressure ...
	I0719 05:36:56.576923    2708 start.go:241] waiting for startup goroutines ...
	I0719 05:36:56.576923    2708 start.go:255] writing updated cluster config ...
	I0719 05:36:56.589647    2708 ssh_runner.go:195] Run: rm -f paused
	I0719 05:36:56.738957    2708 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0719 05:36:56.743225    2708 out.go:177] * Done! kubectl is now configured to use "multinode-761300" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.305085886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.334281829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.334403029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.334448729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.334733730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:33:37 multinode-761300 cri-dockerd[1327]: time="2024-07-19T05:33:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2db86aab06c2b0efc436499c9cb80cd11a3e4821ca40499d5c828e62f158840e/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 05:33:37 multinode-761300 cri-dockerd[1327]: time="2024-07-19T05:33:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8880cece050b30dda49fb0c3c986c6e32386f5b334e4c22dc45f985c21b67b81/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.673545243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.674082846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.674190047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.674474249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.824316107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.824545509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.824564309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:33:37 multinode-761300 dockerd[1434]: time="2024-07-19T05:33:37.825037712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:37:22 multinode-761300 dockerd[1434]: time="2024-07-19T05:37:22.343498444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:37:22 multinode-761300 dockerd[1434]: time="2024-07-19T05:37:22.343785046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:37:22 multinode-761300 dockerd[1434]: time="2024-07-19T05:37:22.343810146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:37:22 multinode-761300 dockerd[1434]: time="2024-07-19T05:37:22.344024449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:37:22 multinode-761300 cri-dockerd[1327]: time="2024-07-19T05:37:22Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3376af93be166ee39f391275321b590a523397091de3ae1d3d42f320447ff1cd/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 05:37:24 multinode-761300 cri-dockerd[1327]: time="2024-07-19T05:37:24Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 19 05:37:24 multinode-761300 dockerd[1434]: time="2024-07-19T05:37:24.418452339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:37:24 multinode-761300 dockerd[1434]: time="2024-07-19T05:37:24.418555740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:37:24 multinode-761300 dockerd[1434]: time="2024-07-19T05:37:24.419510152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:37:24 multinode-761300 dockerd[1434]: time="2024-07-19T05:37:24.419854956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4a5a7f7d7c88b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   48 seconds ago      Running             busybox                   0                   3376af93be166       busybox-fc5497c4f-n4tql
	17479f193bde6       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   0                   8880cece050b3       coredns-7db6d8ff4d-hw9kh
	7992ac3e32925       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   2db86aab06c2b       storage-provisioner
	81297ef97ccfe       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              4 minutes ago       Running             kindnet-cni               0                   342774c2cfe86       kindnet-dj497
	c7f3e45f7ac5a       55bb025d2cfa5                                                                                         4 minutes ago       Running             kube-proxy                0                   605bd6887ea94       kube-proxy-c4z7f
	1e25c1f162f5c       3edc18e7b7672                                                                                         5 minutes ago       Running             kube-scheduler            0                   b8966b015c45c       kube-scheduler-multinode-761300
	86b38e87981e5       76932a3b37d7e                                                                                         5 minutes ago       Running             kube-controller-manager   0                   20495b8d48375       kube-controller-manager-multinode-761300
	d59292a30318a       3861cfcd7c04c                                                                                         5 minutes ago       Running             etcd                      0                   9afe226cce244       etcd-multinode-761300
	d8ebf4b1a3d90       1f6d574d502f3                                                                                         5 minutes ago       Running             kube-apiserver            0                   44cdc617bc650       kube-apiserver-multinode-761300
	
	
	==> coredns [17479f193bde] <==
	[INFO] 10.244.1.2:59610 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098101s
	[INFO] 10.244.0.3:51723 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000099002s
	[INFO] 10.244.0.3:56803 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000119402s
	[INFO] 10.244.0.3:49469 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167302s
	[INFO] 10.244.0.3:55677 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113101s
	[INFO] 10.244.0.3:45799 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000153001s
	[INFO] 10.244.0.3:34957 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269103s
	[INFO] 10.244.0.3:42013 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098001s
	[INFO] 10.244.0.3:52144 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172802s
	[INFO] 10.244.1.2:33742 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163902s
	[INFO] 10.244.1.2:34795 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000317004s
	[INFO] 10.244.1.2:43217 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137402s
	[INFO] 10.244.1.2:55546 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000224302s
	[INFO] 10.244.0.3:55937 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239803s
	[INFO] 10.244.0.3:48596 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068101s
	[INFO] 10.244.0.3:47339 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077101s
	[INFO] 10.244.0.3:45789 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163303s
	[INFO] 10.244.1.2:53057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140801s
	[INFO] 10.244.1.2:49936 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138202s
	[INFO] 10.244.1.2:51934 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086001s
	[INFO] 10.244.1.2:50345 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093801s
	[INFO] 10.244.0.3:38065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244803s
	[INFO] 10.244.0.3:42402 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000454005s
	[INFO] 10.244.0.3:54728 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000466406s
	[INFO] 10.244.0.3:52215 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087501s
	
	
	==> describe nodes <==
	Name:               multinode-761300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-761300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-761300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T05_33_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 05:32:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-761300
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 05:38:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 05:37:37 +0000   Fri, 19 Jul 2024 05:32:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 05:37:37 +0000   Fri, 19 Jul 2024 05:32:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 05:37:37 +0000   Fri, 19 Jul 2024 05:32:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 05:37:37 +0000   Fri, 19 Jul 2024 05:33:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.162.16
	  Hostname:    multinode-761300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 55dbe98376304f17b8add6de006da546
	  System UUID:                802f23c7-7e66-2447-8cf2-4f28d0672512
	  Boot ID:                    6897c535-9127-4845-b9df-31b329469ce1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n4tql                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-7db6d8ff4d-hw9kh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m56s
	  kube-system                 etcd-multinode-761300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m13s
	  kube-system                 kindnet-dj497                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m57s
	  kube-system                 kube-apiserver-multinode-761300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-controller-manager-multinode-761300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-c4z7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-scheduler-multinode-761300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m55s  kube-proxy       
	  Normal  Starting                 5m11s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m10s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m10s  kubelet          Node multinode-761300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m10s  kubelet          Node multinode-761300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m10s  kubelet          Node multinode-761300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m58s  node-controller  Node multinode-761300 event: Registered Node multinode-761300 in Controller
	  Normal  NodeReady                4m36s  kubelet          Node multinode-761300 status is now: NodeReady
	
	
	Name:               multinode-761300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-761300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-761300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T05_36_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 05:36:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-761300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 05:38:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 05:37:26 +0000   Fri, 19 Jul 2024 05:36:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 05:37:26 +0000   Fri, 19 Jul 2024 05:36:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 05:37:26 +0000   Fri, 19 Jul 2024 05:36:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 05:37:26 +0000   Fri, 19 Jul 2024 05:36:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.167.151
	  Hostname:    multinode-761300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c918f138aaff43f498eaacc539897bb1
	  System UUID:                62e15326-3b75-2c4c-8a83-de31ba1535c2
	  Boot ID:                    a1a8f8ea-3af9-4d28-b846-189f682f48fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22cdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kindnet-6wxhn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      107s
	  kube-system                 kube-proxy-mjv8l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  107s (x2 over 107s)  kubelet          Node multinode-761300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x2 over 107s)  kubelet          Node multinode-761300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x2 over 107s)  kubelet          Node multinode-761300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           103s                 node-controller  Node multinode-761300-m02 event: Registered Node multinode-761300-m02 in Controller
	  Normal  NodeReady                78s                  kubelet          Node multinode-761300-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Jul19 05:31] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +50.794740] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.178077] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[Jul19 05:32] systemd-fstab-generator[1001]: Ignoring "noauto" option for root device
	[  +0.102122] kauditd_printk_skb: 65 callbacks suppressed
	[  +0.538044] systemd-fstab-generator[1041]: Ignoring "noauto" option for root device
	[  +0.194863] systemd-fstab-generator[1053]: Ignoring "noauto" option for root device
	[  +0.234430] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +2.867833] systemd-fstab-generator[1280]: Ignoring "noauto" option for root device
	[  +0.181155] systemd-fstab-generator[1292]: Ignoring "noauto" option for root device
	[  +0.206807] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +0.276871] systemd-fstab-generator[1320]: Ignoring "noauto" option for root device
	[ +11.205368] systemd-fstab-generator[1419]: Ignoring "noauto" option for root device
	[  +0.101247] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.724741] systemd-fstab-generator[1663]: Ignoring "noauto" option for root device
	[  +7.048743] systemd-fstab-generator[1864]: Ignoring "noauto" option for root device
	[  +0.114090] kauditd_printk_skb: 70 callbacks suppressed
	[Jul19 05:33] systemd-fstab-generator[2273]: Ignoring "noauto" option for root device
	[  +0.151206] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.041598] systemd-fstab-generator[2460]: Ignoring "noauto" option for root device
	[  +0.211215] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.184988] kauditd_printk_skb: 51 callbacks suppressed
	[Jul19 05:37] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [d59292a30318] <==
	{"level":"info","ts":"2024-07-19T05:32:56.653595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T05:32:56.655988Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"600e41776d6e5bf4","local-member-id":"ea87c63def213d9a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:32:56.660726Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:32:56.660852Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:33:19.706603Z","caller":"traceutil/trace.go:171","msg":"trace[483533623] linearizableReadLoop","detail":"{readStateIndex:386; appliedIndex:385; }","duration":"201.131673ms","start":"2024-07-19T05:33:19.505447Z","end":"2024-07-19T05:33:19.706578Z","steps":["trace[483533623] 'read index received'  (duration: 200.911474ms)","trace[483533623] 'applied index is now lower than readState.Index'  (duration: 219.099µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:33:19.709261Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"203.778164ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-761300\" ","response":"range_response_count:1 size:4486"}
	{"level":"info","ts":"2024-07-19T05:33:19.709299Z","caller":"traceutil/trace.go:171","msg":"trace[1442082592] range","detail":"{range_begin:/registry/minions/multinode-761300; range_end:; response_count:1; response_revision:371; }","duration":"203.873463ms","start":"2024-07-19T05:33:19.505415Z","end":"2024-07-19T05:33:19.709289Z","steps":["trace[1442082592] 'agreement among raft nodes before linearized reading'  (duration: 203.775464ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:33:19.706928Z","caller":"traceutil/trace.go:171","msg":"trace[428048098] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"212.103933ms","start":"2024-07-19T05:33:19.49481Z","end":"2024-07-19T05:33:19.706914Z","steps":["trace[428048098] 'process raft request'  (duration: 211.553135ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:33:22.678993Z","caller":"traceutil/trace.go:171","msg":"trace[683277132] linearizableReadLoop","detail":"{readStateIndex:389; appliedIndex:388; }","duration":"177.990988ms","start":"2024-07-19T05:33:22.500978Z","end":"2024-07-19T05:33:22.678969Z","steps":["trace[683277132] 'read index received'  (duration: 177.52279ms)","trace[683277132] 'applied index is now lower than readState.Index'  (duration: 467.298µs)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:33:22.679392Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.396488ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-761300\" ","response":"range_response_count:1 size:4486"}
	{"level":"info","ts":"2024-07-19T05:33:22.679431Z","caller":"traceutil/trace.go:171","msg":"trace[1923619889] range","detail":"{range_begin:/registry/minions/multinode-761300; range_end:; response_count:1; response_revision:373; }","duration":"178.485087ms","start":"2024-07-19T05:33:22.500936Z","end":"2024-07-19T05:33:22.679421Z","steps":["trace[1923619889] 'agreement among raft nodes before linearized reading'  (duration: 178.229788ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:33:22.68025Z","caller":"traceutil/trace.go:171","msg":"trace[1074284699] transaction","detail":"{read_only:false; response_revision:373; number_of_response:1; }","duration":"194.98635ms","start":"2024-07-19T05:33:22.485236Z","end":"2024-07-19T05:33:22.680223Z","steps":["trace[1074284699] 'process raft request'  (duration: 193.595853ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:34:46.556475Z","caller":"traceutil/trace.go:171","msg":"trace[301407970] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"101.08492ms","start":"2024-07-19T05:34:46.45537Z","end":"2024-07-19T05:34:46.556455Z","steps":["trace[301407970] 'process raft request'  (duration: 100.929618ms)"],"step_count":1}
	{"level":"info","ts":"2024-07-19T05:36:31.216319Z","caller":"traceutil/trace.go:171","msg":"trace[884444471] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"103.676788ms","start":"2024-07-19T05:36:31.112624Z","end":"2024-07-19T05:36:31.216301Z","steps":["trace[884444471] 'process raft request'  (duration: 83.268574ms)","trace[884444471] 'compare'  (duration: 20.280812ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:36:37.05248Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"406.985615ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-07-19T05:36:37.05258Z","caller":"traceutil/trace.go:171","msg":"trace[426685998] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:0; response_revision:596; }","duration":"407.123616ms","start":"2024-07-19T05:36:36.64544Z","end":"2024-07-19T05:36:37.052563Z","steps":["trace[426685998] 'count revisions from in-memory index tree'  (duration: 406.911815ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:36:37.052632Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:36:36.645425Z","time spent":"407.196617ms","remote":"127.0.0.1:49996","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":2,"response size":30,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" count_only:true "}
	{"level":"warn","ts":"2024-07-19T05:36:37.057573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.218718ms","expected-duration":"100ms","prefix":"","request":"header:<ID:4439019577747374142 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-761300\" mod_revision:578 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-761300\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-761300\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-07-19T05:36:37.057848Z","caller":"traceutil/trace.go:171","msg":"trace[1091769469] linearizableReadLoop","detail":"{readStateIndex:654; appliedIndex:653; }","duration":"374.081074ms","start":"2024-07-19T05:36:36.683757Z","end":"2024-07-19T05:36:37.057838Z","steps":["trace[1091769469] 'read index received'  (duration: 158.730944ms)","trace[1091769469] 'applied index is now lower than readState.Index'  (duration: 215.34903ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:36:37.058526Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"374.821682ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-761300-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-07-19T05:36:37.058834Z","caller":"traceutil/trace.go:171","msg":"trace[14710090] range","detail":"{range_begin:/registry/minions/multinode-761300-m02; range_end:; response_count:1; response_revision:597; }","duration":"375.180386ms","start":"2024-07-19T05:36:36.683642Z","end":"2024-07-19T05:36:37.058822Z","steps":["trace[14710090] 'agreement among raft nodes before linearized reading'  (duration: 374.793382ms)"],"step_count":1}
	{"level":"warn","ts":"2024-07-19T05:36:37.058866Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:36:36.683635Z","time spent":"375.220086ms","remote":"127.0.0.1:49996","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3172,"request content":"key:\"/registry/minions/multinode-761300-m02\" "}
	{"level":"info","ts":"2024-07-19T05:36:37.058582Z","caller":"traceutil/trace.go:171","msg":"trace[1675600478] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"406.938814ms","start":"2024-07-19T05:36:36.651633Z","end":"2024-07-19T05:36:37.058572Z","steps":["trace[1675600478] 'process raft request'  (duration: 190.903078ms)","trace[1675600478] 'compare'  (duration: 210.923084ms)"],"step_count":2}
	{"level":"warn","ts":"2024-07-19T05:36:37.059401Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-19T05:36:36.651621Z","time spent":"407.741023ms","remote":"127.0.0.1:50090","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":553,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/multinode-761300\" mod_revision:578 > success:<request_put:<key:\"/registry/leases/kube-node-lease/multinode-761300\" value_size:496 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/multinode-761300\" > >"}
	{"level":"info","ts":"2024-07-19T05:36:37.589011Z","caller":"traceutil/trace.go:171","msg":"trace[573832193] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"119.467035ms","start":"2024-07-19T05:36:37.469506Z","end":"2024-07-19T05:36:37.588973Z","steps":["trace[573832193] 'process raft request'  (duration: 119.228233ms)"],"step_count":1}
	
	
	==> kernel <==
	 05:38:12 up 7 min,  0 users,  load average: 0.20, 0.24, 0.12
	Linux multinode-761300 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [81297ef97ccf] <==
	I0719 05:37:05.464508       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:37:15.463178       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:37:15.463298       1 main.go:303] handling current node
	I0719 05:37:15.463335       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:37:15.463343       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:37:25.456570       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:37:25.456990       1 main.go:303] handling current node
	I0719 05:37:25.457265       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:37:25.457296       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:37:35.455766       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:37:35.455849       1 main.go:303] handling current node
	I0719 05:37:35.456182       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:37:35.456243       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:37:45.462444       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:37:45.462555       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:37:45.462873       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:37:45.462907       1 main.go:303] handling current node
	I0719 05:37:55.457782       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:37:55.457897       1 main.go:303] handling current node
	I0719 05:37:55.457917       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:37:55.457925       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:38:05.465290       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:38:05.465790       1 main.go:303] handling current node
	I0719 05:38:05.466013       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:38:05.466072       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [d8ebf4b1a3d9] <==
	I0719 05:32:58.936105       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 05:32:59.547470       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 05:32:59.556954       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 05:32:59.557292       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 05:33:00.744607       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 05:33:00.836061       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 05:33:00.960752       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0719 05:33:00.980270       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.162.16]
	I0719 05:33:00.981212       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 05:33:00.989442       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 05:33:01.640950       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 05:33:01.926531       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 05:33:01.954468       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0719 05:33:01.988844       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 05:33:15.588419       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0719 05:33:15.883518       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0719 05:37:27.448891       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60610: use of closed network connection
	E0719 05:37:27.998608       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60612: use of closed network connection
	E0719 05:37:28.614427       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60614: use of closed network connection
	E0719 05:37:29.154955       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60616: use of closed network connection
	E0719 05:37:29.682904       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60618: use of closed network connection
	E0719 05:37:30.218885       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60620: use of closed network connection
	E0719 05:37:31.160010       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60623: use of closed network connection
	E0719 05:37:42.228474       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60629: use of closed network connection
	E0719 05:37:52.732193       1 conn.go:339] Error on socket receive: read tcp 172.28.162.16:8443->172.28.160.1:60631: use of closed network connection
	
	
	==> kube-controller-manager [86b38e87981e] <==
	I0719 05:33:15.552813       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 05:33:16.230820       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="304.294674ms"
	I0719 05:33:16.278637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.763252ms"
	I0719 05:33:16.310448       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.692536ms"
	I0719 05:33:16.310910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.299µs"
	I0719 05:33:16.691768       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="116.034396ms"
	I0719 05:33:16.707992       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="16.184916ms"
	I0719 05:33:16.708348       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50µs"
	I0719 05:33:36.681067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.7µs"
	I0719 05:33:36.707876       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="154.7µs"
	I0719 05:33:38.609786       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.903µs"
	I0719 05:33:38.665396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.226653ms"
	I0719 05:33:38.666935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.201µs"
	I0719 05:33:39.887253       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 05:36:25.905178       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-761300-m02\" does not exist"
	I0719 05:36:25.920845       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-761300-m02" podCIDRs=["10.244.1.0/24"]
	I0719 05:36:29.924369       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-761300-m02"
	I0719 05:36:54.913426       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:37:21.760572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.647054ms"
	I0719 05:37:21.795725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.419919ms"
	I0719 05:37:21.795947       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.5µs"
	I0719 05:37:24.665622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.047725ms"
	I0719 05:37:24.665742       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31µs"
	I0719 05:37:24.766270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.397567ms"
	I0719 05:37:24.767646       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.1µs"
	
	
	==> kube-proxy [c7f3e45f7ac5] <==
	I0719 05:33:17.247310       1 server_linux.go:69] "Using iptables proxy"
	I0719 05:33:17.266745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.162.16"]
	I0719 05:33:17.335859       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 05:33:17.336129       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 05:33:17.336392       1 server_linux.go:165] "Using iptables Proxier"
	I0719 05:33:17.340299       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 05:33:17.341598       1 server.go:872] "Version info" version="v1.30.3"
	I0719 05:33:17.341834       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:33:17.343550       1 config.go:192] "Starting service config controller"
	I0719 05:33:17.343610       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:33:17.343638       1 config.go:101] "Starting endpoint slice config controller"
	I0719 05:33:17.343771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:33:17.345233       1 config.go:319] "Starting node config controller"
	I0719 05:33:17.345471       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:33:17.444786       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 05:33:17.444830       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:33:17.449592       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1e25c1f162f5] <==
	W0719 05:32:59.750772       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 05:32:59.751059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 05:32:59.776436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 05:32:59.777003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 05:32:59.839535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 05:32:59.839645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 05:32:59.877145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 05:32:59.877192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 05:32:59.877377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 05:32:59.877888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 05:32:59.890177       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 05:32:59.890220       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 05:32:59.892022       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 05:32:59.894628       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 05:33:00.010258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 05:33:00.010397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 05:33:00.033374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 05:33:00.033622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 05:33:00.069187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 05:33:00.069640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 05:33:00.091838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 05:33:00.092390       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 05:33:00.099779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 05:33:00.099822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0719 05:33:01.680818       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 05:34:02 multinode-761300 kubelet[2280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 05:35:02 multinode-761300 kubelet[2280]: E0719 05:35:02.150907    2280 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:35:02 multinode-761300 kubelet[2280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:35:02 multinode-761300 kubelet[2280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:35:02 multinode-761300 kubelet[2280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:35:02 multinode-761300 kubelet[2280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 05:36:02 multinode-761300 kubelet[2280]: E0719 05:36:02.151102    2280 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:36:02 multinode-761300 kubelet[2280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:36:02 multinode-761300 kubelet[2280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:36:02 multinode-761300 kubelet[2280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:36:02 multinode-761300 kubelet[2280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 05:37:02 multinode-761300 kubelet[2280]: E0719 05:37:02.149123    2280 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:37:02 multinode-761300 kubelet[2280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:37:02 multinode-761300 kubelet[2280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:37:02 multinode-761300 kubelet[2280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:37:02 multinode-761300 kubelet[2280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 05:37:21 multinode-761300 kubelet[2280]: I0719 05:37:21.749008    2280 topology_manager.go:215] "Topology Admit Handler" podUID="f8302851-41b4-4d49-90b7-7a98190dfa1d" podNamespace="default" podName="busybox-fc5497c4f-n4tql"
	Jul 19 05:37:21 multinode-761300 kubelet[2280]: I0719 05:37:21.902286    2280 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfthb\" (UniqueName: \"kubernetes.io/projected/f8302851-41b4-4d49-90b7-7a98190dfa1d-kube-api-access-jfthb\") pod \"busybox-fc5497c4f-n4tql\" (UID: \"f8302851-41b4-4d49-90b7-7a98190dfa1d\") " pod="default/busybox-fc5497c4f-n4tql"
	Jul 19 05:37:24 multinode-761300 kubelet[2280]: I0719 05:37:24.750234    2280 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-n4tql" podStartSLOduration=2.176132873 podStartE2EDuration="3.75019467s" podCreationTimestamp="2024-07-19 05:37:21 +0000 UTC" firstStartedPulling="2024-07-19 05:37:22.560798711 +0000 UTC m=+260.765418852" lastFinishedPulling="2024-07-19 05:37:24.134860508 +0000 UTC m=+262.339480649" observedRunningTime="2024-07-19 05:37:24.749624063 +0000 UTC m=+262.954244204" watchObservedRunningTime="2024-07-19 05:37:24.75019467 +0000 UTC m=+262.954814811"
	Jul 19 05:37:52 multinode-761300 kubelet[2280]: E0719 05:37:52.733511    2280 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:46696->127.0.0.1:44431: write tcp 127.0.0.1:46696->127.0.0.1:44431: write: broken pipe
	Jul 19 05:38:02 multinode-761300 kubelet[2280]: E0719 05:38:02.149868    2280 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:38:02 multinode-761300 kubelet[2280]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:38:02 multinode-761300 kubelet[2280]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:38:02 multinode-761300 kubelet[2280]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:38:02 multinode-761300 kubelet[2280]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 05:38:04.697602   11524 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-761300 -n multinode-761300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-761300 -n multinode-761300: (12.0822926s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-761300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (56.97s)

TestMultiNode/serial/RestartKeepsNodes (426.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-761300
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-761300
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-761300: (1m39.9735242s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-761300 --wait=true -v=8 --alsologtostderr
E0719 05:55:10.168059    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 05:58:13.397728    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-761300 --wait=true -v=8 --alsologtostderr: exit status 1 (4m48.5094897s)

                                                
                                                
-- stdout --
	* [multinode-761300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-761300" primary control-plane node in "multinode-761300" cluster
	* Restarting existing hyperv VM for "multinode-761300" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	* Starting "multinode-761300-m02" worker node in "multinode-761300" cluster
	* Restarting existing hyperv VM for "multinode-761300-m02" ...
	* Found network options:
	  - NO_PROXY=172.28.162.149
	  - NO_PROXY=172.28.162.149
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	  - env NO_PROXY=172.28.162.149

                                                
                                                
-- /stdout --
** stderr ** 
	W0719 05:55:03.941961    5884 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 05:55:04.012004    5884 out.go:291] Setting OutFile to fd 628 ...
	I0719 05:55:04.013047    5884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:55:04.013047    5884 out.go:304] Setting ErrFile to fd 508...
	I0719 05:55:04.013047    5884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:55:04.035369    5884 out.go:298] Setting JSON to false
	I0719 05:55:04.041912    5884 start.go:129] hostinfo: {"hostname":"minikube6","uptime":27530,"bootTime":1721340973,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 05:55:04.042155    5884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 05:55:04.089782    5884 out.go:177] * [multinode-761300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 05:55:04.139392    5884 notify.go:220] Checking for updates...
	I0719 05:55:04.148458    5884 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:55:04.155665    5884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:55:04.200939    5884 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 05:55:04.213634    5884 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:55:04.228706    5884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:55:04.243036    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:55:04.243518    5884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:55:09.846694    5884 out.go:177] * Using the hyperv driver based on existing profile
	I0719 05:55:09.859013    5884 start.go:297] selected driver: hyperv
	I0719 05:55:09.859550    5884 start.go:901] validating driver "hyperv" against &{Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.165.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:55:09.859802    5884 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:55:09.911405    5884 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:55:09.911405    5884 cni.go:84] Creating CNI manager for ""
	I0719 05:55:09.911405    5884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 05:55:09.911405    5884 start.go:340] cluster config:
	{Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.165.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:55:09.912291    5884 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:55:10.004135    5884 out.go:177] * Starting "multinode-761300" primary control-plane node in "multinode-761300" cluster
	I0719 05:55:10.037823    5884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:55:10.038532    5884 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 05:55:10.038532    5884 cache.go:56] Caching tarball of preloaded images
	I0719 05:55:10.039070    5884 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:55:10.039151    5884 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 05:55:10.039522    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:55:10.042784    5884 start.go:360] acquireMachinesLock for multinode-761300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:55:10.043025    5884 start.go:364] duration metric: took 241µs to acquireMachinesLock for "multinode-761300"
	I0719 05:55:10.043196    5884 start.go:96] Skipping create...Using existing machine configuration
	I0719 05:55:10.043274    5884 fix.go:54] fixHost starting: 
	I0719 05:55:10.044020    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:12.841116    5884 main.go:141] libmachine: [stdout =====>] : Off
	
	I0719 05:55:12.841291    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:12.841291    5884 fix.go:112] recreateIfNeeded on multinode-761300: state=Stopped err=<nil>
	W0719 05:55:12.841291    5884 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 05:55:12.878737    5884 out.go:177] * Restarting existing hyperv VM for "multinode-761300" ...
	I0719 05:55:12.902031    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-761300
	I0719 05:55:15.981672    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:15.981672    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:15.981672    5884 main.go:141] libmachine: Waiting for host to start...
	I0719 05:55:15.981672    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:18.279507    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:18.280440    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:18.280525    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:20.834982    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:20.834982    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:21.836048    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:24.091133    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:24.091207    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:24.091381    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:26.667159    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:26.667931    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:27.678397    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:29.920551    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:29.920640    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:29.920758    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:32.540674    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:32.540674    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:33.552900    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:35.776889    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:35.777737    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:35.777737    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:38.350455    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:38.350455    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:39.360647    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:41.645849    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:41.646075    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:41.646321    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:44.244611    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:55:44.245439    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:44.248355    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:46.401394    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:46.401394    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:46.402256    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:48.938183    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:55:48.938183    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:48.939442    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:55:48.941544    5884 machine.go:94] provisionDockerMachine start ...
	I0719 05:55:48.941544    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:51.097273    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:51.098091    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:51.098091    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:53.671346    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:55:53.671346    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:53.678441    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:55:53.678619    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:55:53.679242    5884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:55:53.812638    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 05:55:53.812741    5884 buildroot.go:166] provisioning hostname "multinode-761300"
	I0719 05:55:53.812741    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:55.993386    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:55.993386    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:55.994634    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:58.590472    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:55:58.590564    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:58.595734    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:55:58.596453    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:55:58.596453    5884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-761300 && echo "multinode-761300" | sudo tee /etc/hostname
	I0719 05:55:58.750725    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-761300
	
	I0719 05:55:58.750725    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:00.902162    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:00.902162    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:00.902454    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:03.476894    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:03.477691    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:03.482544    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:03.483220    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:03.483220    5884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-761300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-761300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-761300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:56:03.628938    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:56:03.628938    5884 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 05:56:03.628938    5884 buildroot.go:174] setting up certificates
	I0719 05:56:03.628938    5884 provision.go:84] configureAuth start
	I0719 05:56:03.629546    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:05.777010    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:05.777509    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:05.777509    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:08.385330    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:08.385330    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:08.386425    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:10.551594    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:10.551594    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:10.552494    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:13.166121    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:13.166236    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:13.166236    5884 provision.go:143] copyHostCerts
	I0719 05:56:13.166236    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 05:56:13.166841    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:56:13.167160    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 05:56:13.167216    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 05:56:13.168689    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 05:56:13.168689    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:56:13.169258    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 05:56:13.169546    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:56:13.170461    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 05:56:13.170461    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:56:13.170461    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 05:56:13.171328    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:56:13.172164    5884 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-761300 san=[127.0.0.1 172.28.162.149 localhost minikube multinode-761300]
	I0719 05:56:13.327115    5884 provision.go:177] copyRemoteCerts
	I0719 05:56:13.337113    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:56:13.337113    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:15.518132    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:15.518228    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:15.518368    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:18.097256    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:18.097256    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:18.098580    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:56:18.219353    5884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8821808s)
	I0719 05:56:18.219353    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 05:56:18.221539    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:56:18.276256    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 05:56:18.276868    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:56:18.322300    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 05:56:18.322300    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 05:56:18.366948    5884 provision.go:87] duration metric: took 14.7377545s to configureAuth
	I0719 05:56:18.366948    5884 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:56:18.367180    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:56:18.367726    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:20.530074    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:20.530590    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:20.530642    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:23.128825    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:23.129130    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:23.134539    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:23.135413    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:23.135413    5884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:56:23.279177    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 05:56:23.279301    5884 buildroot.go:70] root file system type: tmpfs
	I0719 05:56:23.279508    5884 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 05:56:23.279645    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:25.452546    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:25.452956    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:25.453076    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:28.109487    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:28.110273    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:28.115826    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:28.116355    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:28.116498    5884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:56:28.275934    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:56:28.275934    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:30.441674    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:30.442142    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:30.442305    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:33.041217    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:33.041527    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:33.046939    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:33.047835    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:33.047835    5884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:56:35.687480    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 05:56:35.687480    5884 machine.go:97] duration metric: took 46.7453656s to provisionDockerMachine
	I0719 05:56:35.687480    5884 start.go:293] postStartSetup for "multinode-761300" (driver="hyperv")
	I0719 05:56:35.687480    5884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:56:35.700688    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:56:35.700688    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:37.879518    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:37.879518    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:37.879518    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:40.441837    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:40.442661    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:40.443047    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:56:40.560330    5884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8595826s)
	I0719 05:56:40.571842    5884 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:56:40.580930    5884 command_runner.go:130] > NAME=Buildroot
	I0719 05:56:40.581056    5884 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 05:56:40.581056    5884 command_runner.go:130] > ID=buildroot
	I0719 05:56:40.581056    5884 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 05:56:40.581056    5884 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 05:56:40.581169    5884 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:56:40.581169    5884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 05:56:40.581332    5884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 05:56:40.582494    5884 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 05:56:40.582494    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 05:56:40.594117    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:56:40.617336    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 05:56:40.671179    5884 start.go:296] duration metric: took 4.9836388s for postStartSetup
	I0719 05:56:40.671179    5884 fix.go:56] duration metric: took 1m30.6267998s for fixHost
	I0719 05:56:40.671710    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:42.853227    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:42.853820    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:42.853820    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:45.416457    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:45.416457    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:45.421787    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:45.422517    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:45.422517    5884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 05:56:45.564540    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721368605.583208225
	
	I0719 05:56:45.564678    5884 fix.go:216] guest clock: 1721368605.583208225
	I0719 05:56:45.564678    5884 fix.go:229] Guest: 2024-07-19 05:56:45.583208225 +0000 UTC Remote: 2024-07-19 05:56:40.6711797 +0000 UTC m=+96.816773801 (delta=4.912028525s)
	I0719 05:56:45.564832    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:47.720562    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:47.721609    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:47.721675    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:50.321979    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:50.323051    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:50.328976    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:50.328976    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:50.329553    5884 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721368605
	I0719 05:56:50.470273    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 05:56:45 UTC 2024
	
	I0719 05:56:50.471250    5884 fix.go:236] clock set: Fri Jul 19 05:56:45 UTC 2024
	 (err=<nil>)
	I0719 05:56:50.471250    5884 start.go:83] releasing machines lock for "multinode-761300", held for 1m40.4269993s
	I0719 05:56:50.471578    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:52.656363    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:52.657128    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:52.657230    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:55.228156    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:55.228365    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:55.232499    5884 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:56:55.232683    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:55.242086    5884 ssh_runner.go:195] Run: cat /version.json
	I0719 05:56:55.242086    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:57.488510    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:57.489025    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:57.489025    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:57.530920    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:57.531414    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:57.531414    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:57:00.207198    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:57:00.207356    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:57:00.207983    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:57:00.223549    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:57:00.223549    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:57:00.224809    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:57:00.323649    5884 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 05:57:00.323858    5884 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0912145s)
	W0719 05:57:00.323981    5884 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 05:57:00.330643    5884 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 05:57:00.331258    5884 ssh_runner.go:235] Completed: cat /version.json: (5.08911s)
	I0719 05:57:00.342305    5884 ssh_runner.go:195] Run: systemctl --version
	I0719 05:57:00.355256    5884 command_runner.go:130] > systemd 252 (252)
	I0719 05:57:00.355256    5884 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 05:57:00.366437    5884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:57:00.372977    5884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 05:57:00.374194    5884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:57:00.385391    5884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:57:00.411179    5884 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 05:57:00.412354    5884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:57:00.412354    5884 start.go:495] detecting cgroup driver to use...
	I0719 05:57:00.412636    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:57:00.447406    5884 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0719 05:57:00.456989    5884 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 05:57:00.456989    5884 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 05:57:00.461173    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 05:57:00.490802    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:57:00.510381    5884 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:57:00.521155    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:57:00.552635    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:57:00.583343    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:57:00.614391    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:57:00.644901    5884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:57:00.678828    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:57:00.708113    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 05:57:00.737686    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 05:57:00.767259    5884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:57:00.784169    5884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 05:57:00.795118    5884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:57:00.823917    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:01.021816    5884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:57:01.056105    5884 start.go:495] detecting cgroup driver to use...
	I0719 05:57:01.067016    5884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:57:01.090659    5884 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 05:57:01.090659    5884 command_runner.go:130] > [Unit]
	I0719 05:57:01.090659    5884 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 05:57:01.090659    5884 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 05:57:01.090659    5884 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 05:57:01.091595    5884 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 05:57:01.091595    5884 command_runner.go:130] > StartLimitBurst=3
	I0719 05:57:01.091595    5884 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 05:57:01.091595    5884 command_runner.go:130] > [Service]
	I0719 05:57:01.091595    5884 command_runner.go:130] > Type=notify
	I0719 05:57:01.091595    5884 command_runner.go:130] > Restart=on-failure
	I0719 05:57:01.091595    5884 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 05:57:01.091650    5884 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 05:57:01.091650    5884 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 05:57:01.091650    5884 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 05:57:01.091650    5884 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 05:57:01.091710    5884 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 05:57:01.091710    5884 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 05:57:01.091710    5884 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 05:57:01.091758    5884 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 05:57:01.091758    5884 command_runner.go:130] > ExecStart=
	I0719 05:57:01.091758    5884 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 05:57:01.091855    5884 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 05:57:01.091909    5884 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 05:57:01.091909    5884 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 05:57:01.091909    5884 command_runner.go:130] > LimitNOFILE=infinity
	I0719 05:57:01.091909    5884 command_runner.go:130] > LimitNPROC=infinity
	I0719 05:57:01.091909    5884 command_runner.go:130] > LimitCORE=infinity
	I0719 05:57:01.091964    5884 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 05:57:01.091964    5884 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 05:57:01.091964    5884 command_runner.go:130] > TasksMax=infinity
	I0719 05:57:01.091964    5884 command_runner.go:130] > TimeoutStartSec=0
	I0719 05:57:01.092008    5884 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 05:57:01.092008    5884 command_runner.go:130] > Delegate=yes
	I0719 05:57:01.092008    5884 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 05:57:01.092008    5884 command_runner.go:130] > KillMode=process
	I0719 05:57:01.092069    5884 command_runner.go:130] > [Install]
	I0719 05:57:01.092069    5884 command_runner.go:130] > WantedBy=multi-user.target
	I0719 05:57:01.104670    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:57:01.136259    5884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:57:01.180293    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:57:01.212731    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:57:01.246856    5884 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 05:57:01.305602    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:57:01.329393    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:57:01.361042    5884 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 05:57:01.374651    5884 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:57:01.379812    5884 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 05:57:01.389806    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:57:01.406884    5884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 05:57:01.450176    5884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:57:01.656381    5884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:57:01.847532    5884 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:57:01.847830    5884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:57:01.893639    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:02.079606    5884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:57:04.817363    5884 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7376388s)
	I0719 05:57:04.828768    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 05:57:04.862804    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:57:04.897712    5884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 05:57:05.110751    5884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 05:57:05.304762    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:05.506812    5884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 05:57:05.547891    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:57:05.581496    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:05.784626    5884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 05:57:05.892286    5884 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 05:57:05.903666    5884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 05:57:05.912341    5884 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 05:57:05.912417    5884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 05:57:05.912417    5884 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I0719 05:57:05.912417    5884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 05:57:05.912535    5884 command_runner.go:130] > Access: 2024-07-19 05:57:05.827743742 +0000
	I0719 05:57:05.912535    5884 command_runner.go:130] > Modify: 2024-07-19 05:57:05.827743742 +0000
	I0719 05:57:05.912535    5884 command_runner.go:130] > Change: 2024-07-19 05:57:05.830743752 +0000
	I0719 05:57:05.912535    5884 command_runner.go:130] >  Birth: -
	I0719 05:57:05.912933    5884 start.go:563] Will wait 60s for crictl version
	I0719 05:57:05.924316    5884 ssh_runner.go:195] Run: which crictl
	I0719 05:57:05.930116    5884 command_runner.go:130] > /usr/bin/crictl
	I0719 05:57:05.942056    5884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:57:06.001987    5884 command_runner.go:130] > Version:  0.1.0
	I0719 05:57:06.002668    5884 command_runner.go:130] > RuntimeName:  docker
	I0719 05:57:06.002668    5884 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 05:57:06.002668    5884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 05:57:06.002668    5884 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 05:57:06.011349    5884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:57:06.047340    5884 command_runner.go:130] > 27.0.3
	I0719 05:57:06.057071    5884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:57:06.090218    5884 command_runner.go:130] > 27.0.3
	I0719 05:57:06.095835    5884 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 05:57:06.096006    5884 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 05:57:06.099449    5884 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 05:57:06.099449    5884 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 05:57:06.099449    5884 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 05:57:06.099449    5884 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 05:57:06.102682    5884 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 05:57:06.102682    5884 ip.go:210] interface addr: 172.28.160.1/20
	I0719 05:57:06.112018    5884 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 05:57:06.118791    5884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:57:06.141010    5884 kubeadm.go:883] updating cluster {Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.165.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:57:06.141434    5884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:57:06.149380    5884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 05:57:06.172941    5884 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 05:57:06.172941    5884 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:57:06.172941    5884 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0719 05:57:06.172941    5884 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0719 05:57:06.172941    5884 docker.go:615] Images already preloaded, skipping extraction
	I0719 05:57:06.180941    5884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:57:06.206061    5884 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 05:57:06.206156    5884 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 05:57:06.206204    5884 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 05:57:06.206204    5884 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:57:06.206204    5884 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0719 05:57:06.206204    5884 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0719 05:57:06.206204    5884 cache_images.go:84] Images are preloaded, skipping loading
	I0719 05:57:06.206204    5884 kubeadm.go:934] updating node { 172.28.162.149 8443 v1.30.3 docker true true} ...
	I0719 05:57:06.206204    5884 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-761300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.162.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 05:57:06.219163    5884 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 05:57:06.253142    5884 command_runner.go:130] > cgroupfs
	I0719 05:57:06.254150    5884 cni.go:84] Creating CNI manager for ""
	I0719 05:57:06.254150    5884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 05:57:06.254150    5884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 05:57:06.254150    5884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.162.149 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-761300 NodeName:multinode-761300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.162.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.162.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 05:57:06.254150    5884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.162.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-761300"
	  kubeletExtraArgs:
	    node-ip: 172.28.162.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.162.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 05:57:06.264135    5884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 05:57:06.284370    5884 command_runner.go:130] > kubeadm
	I0719 05:57:06.284370    5884 command_runner.go:130] > kubectl
	I0719 05:57:06.284370    5884 command_runner.go:130] > kubelet
	I0719 05:57:06.284370    5884 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:57:06.296526    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:57:06.313231    5884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 05:57:06.346776    5884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 05:57:06.379170    5884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0719 05:57:06.422338    5884 ssh_runner.go:195] Run: grep 172.28.162.149	control-plane.minikube.internal$ /etc/hosts
	I0719 05:57:06.428037    5884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.162.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:57:06.459893    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:06.649764    5884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:57:06.679700    5884 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300 for IP: 172.28.162.149
	I0719 05:57:06.679700    5884 certs.go:194] generating shared ca certs ...
	I0719 05:57:06.679700    5884 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:06.680692    5884 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 05:57:06.680692    5884 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 05:57:06.680692    5884 certs.go:256] generating profile certs ...
	I0719 05:57:06.681699    5884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\client.key
	I0719 05:57:06.681699    5884 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.f844f9b5
	I0719 05:57:06.681699    5884 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.f844f9b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.162.149]
	I0719 05:57:06.860967    5884 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.f844f9b5 ...
	I0719 05:57:06.860967    5884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.f844f9b5: {Name:mk4dec42bb748b9416840ede947ad20260cdef70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:06.862282    5884 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.f844f9b5 ...
	I0719 05:57:06.862282    5884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.f844f9b5: {Name:mk8888d555d0e90d859c52eb64eaa2d1defffc7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:06.863082    5884 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.f844f9b5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt
	I0719 05:57:06.876221    5884 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.f844f9b5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key
	I0719 05:57:06.876452    5884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key
	I0719 05:57:06.877520    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 05:57:06.877710    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 05:57:06.879151    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 05:57:06.879479    5884 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 05:57:06.879479    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 05:57:06.879888    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 05:57:06.880125    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 05:57:06.880125    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 05:57:06.880864    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 05:57:06.880864    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:06.881586    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 05:57:06.881586    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 05:57:06.882836    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:57:06.932759    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 05:57:06.982409    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:57:07.035423    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:57:07.088709    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 05:57:07.136475    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 05:57:07.187229    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:57:07.234697    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 05:57:07.282826    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:57:07.334676    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 05:57:07.380706    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 05:57:07.425333    5884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:57:07.469814    5884 ssh_runner.go:195] Run: openssl version
	I0719 05:57:07.477488    5884 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 05:57:07.488949    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:57:07.518453    5884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:07.525039    5884 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:07.525039    5884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:07.535029    5884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:07.543601    5884 command_runner.go:130] > b5213941
	I0719 05:57:07.553760    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 05:57:07.587513    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 05:57:07.619449    5884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 05:57:07.627041    5884 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 05:57:07.627177    5884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 05:57:07.639051    5884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 05:57:07.649028    5884 command_runner.go:130] > 51391683
	I0719 05:57:07.661758    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
	I0719 05:57:07.693704    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 05:57:07.723575    5884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 05:57:07.730741    5884 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 05:57:07.730741    5884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 05:57:07.742020    5884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 05:57:07.751157    5884 command_runner.go:130] > 3ec20f2e
	I0719 05:57:07.761641    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:57:07.793320    5884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:57:07.807103    5884 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:57:07.807103    5884 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 05:57:07.807103    5884 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0719 05:57:07.807103    5884 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 05:57:07.807103    5884 command_runner.go:130] > Access: 2024-07-19 05:32:50.038998983 +0000
	I0719 05:57:07.807103    5884 command_runner.go:130] > Modify: 2024-07-19 05:32:50.038998983 +0000
	I0719 05:57:07.807103    5884 command_runner.go:130] > Change: 2024-07-19 05:32:50.038998983 +0000
	I0719 05:57:07.807257    5884 command_runner.go:130] >  Birth: 2024-07-19 05:32:50.038998983 +0000
	I0719 05:57:07.820480    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 05:57:07.830552    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.842273    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 05:57:07.855259    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.867218    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 05:57:07.877558    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.889242    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 05:57:07.899304    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.911728    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 05:57:07.920732    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.932948    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 05:57:07.941281    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.941342    5884 kubeadm.go:392] StartCluster: {Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.165.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:57:07.951099    5884 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 05:57:07.987212    5884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 05:57:08.004024    5884 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0719 05:57:08.004024    5884 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0719 05:57:08.004024    5884 command_runner.go:130] > /var/lib/minikube/etcd:
	I0719 05:57:08.004593    5884 command_runner.go:130] > member
	I0719 05:57:08.005094    5884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 05:57:08.005192    5884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 05:57:08.015846    5884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 05:57:08.034462    5884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 05:57:08.035942    5884 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-761300" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:57:08.036650    5884 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-761300" cluster setting kubeconfig missing "multinode-761300" context setting]
	I0719 05:57:08.037586    5884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:08.052628    5884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:57:08.053515    5884 kapi.go:59] client config for multinode-761300: &rest.Config{Host:"https://172.28.162.149:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 05:57:08.055343    5884 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 05:57:08.067708    5884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 05:57:08.086154    5884 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0719 05:57:08.086154    5884 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0719 05:57:08.086154    5884 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0719 05:57:08.086154    5884 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0719 05:57:08.086315    5884 command_runner.go:130] >  kind: InitConfiguration
	I0719 05:57:08.086315    5884 command_runner.go:130] >  localAPIEndpoint:
	I0719 05:57:08.086315    5884 command_runner.go:130] > -  advertiseAddress: 172.28.162.16
	I0719 05:57:08.086315    5884 command_runner.go:130] > +  advertiseAddress: 172.28.162.149
	I0719 05:57:08.086315    5884 command_runner.go:130] >    bindPort: 8443
	I0719 05:57:08.086315    5884 command_runner.go:130] >  bootstrapTokens:
	I0719 05:57:08.086315    5884 command_runner.go:130] >    - groups:
	I0719 05:57:08.086315    5884 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0719 05:57:08.086315    5884 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0719 05:57:08.086315    5884 command_runner.go:130] >    name: "multinode-761300"
	I0719 05:57:08.086501    5884 command_runner.go:130] >    kubeletExtraArgs:
	I0719 05:57:08.086501    5884 command_runner.go:130] > -    node-ip: 172.28.162.16
	I0719 05:57:08.086501    5884 command_runner.go:130] > +    node-ip: 172.28.162.149
	I0719 05:57:08.086501    5884 command_runner.go:130] >    taints: []
	I0719 05:57:08.086501    5884 command_runner.go:130] >  ---
	I0719 05:57:08.086566    5884 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0719 05:57:08.086616    5884 command_runner.go:130] >  kind: ClusterConfiguration
	I0719 05:57:08.086690    5884 command_runner.go:130] >  apiServer:
	I0719 05:57:08.086747    5884 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.162.16"]
	I0719 05:57:08.086747    5884 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.162.149"]
	I0719 05:57:08.086747    5884 command_runner.go:130] >    extraArgs:
	I0719 05:57:08.086775    5884 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0719 05:57:08.086819    5884 command_runner.go:130] >  controllerManager:
	I0719 05:57:08.086886    5884 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.162.16
	+  advertiseAddress: 172.28.162.149
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-761300"
	   kubeletExtraArgs:
	-    node-ip: 172.28.162.16
	+    node-ip: 172.28.162.149
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.162.16"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.162.149"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
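	The drift check logged above (kubeadm.go:640) boils down to running `diff -u` on the current and candidate kubeadm.yaml: a non-empty diff (exit status 1) means the cluster is reconfigured from the new file. A minimal sketch of that decision, not minikube's actual code (the function name is mine):

	```python
	import subprocess

	def kubeadm_config_drifted(current_path, candidate_path):
	    """Return (drifted, unified_diff) for two kubeadm config files.

	    `diff -u` exits 0 when the files are identical and 1 when they
	    differ; a non-empty diff means the cluster must be reconfigured
	    from the candidate file, as in the log above.
	    """
	    res = subprocess.run(
	        ["diff", "-u", current_path, candidate_path],
	        capture_output=True, text=True,
	    )
	    return res.returncode == 1, res.stdout
	```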
	I0719 05:57:08.086933    5884 kubeadm.go:1160] stopping kube-system containers ...
	I0719 05:57:08.096093    5884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 05:57:08.126228    5884 command_runner.go:130] > 17479f193bde
	I0719 05:57:08.126326    5884 command_runner.go:130] > 7992ac3e3292
	I0719 05:57:08.126326    5884 command_runner.go:130] > 2db86aab06c2
	I0719 05:57:08.126326    5884 command_runner.go:130] > 8880cece050b
	I0719 05:57:08.126326    5884 command_runner.go:130] > 81297ef97ccf
	I0719 05:57:08.126326    5884 command_runner.go:130] > c7f3e45f7ac5
	I0719 05:57:08.126326    5884 command_runner.go:130] > 605bd6887ea9
	I0719 05:57:08.126411    5884 command_runner.go:130] > 342774c2cfe8
	I0719 05:57:08.126411    5884 command_runner.go:130] > 1e25c1f162f5
	I0719 05:57:08.126411    5884 command_runner.go:130] > 86b38e87981e
	I0719 05:57:08.126411    5884 command_runner.go:130] > d59292a30318
	I0719 05:57:08.126411    5884 command_runner.go:130] > d8ebf4b1a3d9
	I0719 05:57:08.126411    5884 command_runner.go:130] > b8966b015c45
	I0719 05:57:08.126411    5884 command_runner.go:130] > 20495b8d4837
	I0719 05:57:08.126411    5884 command_runner.go:130] > 9afe226cce24
	I0719 05:57:08.126476    5884 command_runner.go:130] > 44cdc617bc65
	I0719 05:57:08.126539    5884 docker.go:483] Stopping containers: [17479f193bde 7992ac3e3292 2db86aab06c2 8880cece050b 81297ef97ccf c7f3e45f7ac5 605bd6887ea9 342774c2cfe8 1e25c1f162f5 86b38e87981e d59292a30318 d8ebf4b1a3d9 b8966b015c45 20495b8d4837 9afe226cce24 44cdc617bc65]
	I0719 05:57:08.135388    5884 ssh_runner.go:195] Run: docker stop 17479f193bde 7992ac3e3292 2db86aab06c2 8880cece050b 81297ef97ccf c7f3e45f7ac5 605bd6887ea9 342774c2cfe8 1e25c1f162f5 86b38e87981e d59292a30318 d8ebf4b1a3d9 b8966b015c45 20495b8d4837 9afe226cce24 44cdc617bc65
	I0719 05:57:08.163424    5884 command_runner.go:130] > 17479f193bde
	I0719 05:57:08.163424    5884 command_runner.go:130] > 7992ac3e3292
	I0719 05:57:08.163424    5884 command_runner.go:130] > 2db86aab06c2
	I0719 05:57:08.163496    5884 command_runner.go:130] > 8880cece050b
	I0719 05:57:08.163496    5884 command_runner.go:130] > 81297ef97ccf
	I0719 05:57:08.163496    5884 command_runner.go:130] > c7f3e45f7ac5
	I0719 05:57:08.163496    5884 command_runner.go:130] > 605bd6887ea9
	I0719 05:57:08.163496    5884 command_runner.go:130] > 342774c2cfe8
	I0719 05:57:08.163496    5884 command_runner.go:130] > 1e25c1f162f5
	I0719 05:57:08.163496    5884 command_runner.go:130] > 86b38e87981e
	I0719 05:57:08.163496    5884 command_runner.go:130] > d59292a30318
	I0719 05:57:08.163496    5884 command_runner.go:130] > d8ebf4b1a3d9
	I0719 05:57:08.163496    5884 command_runner.go:130] > b8966b015c45
	I0719 05:57:08.163496    5884 command_runner.go:130] > 20495b8d4837
	I0719 05:57:08.163654    5884 command_runner.go:130] > 9afe226cce24
	I0719 05:57:08.163654    5884 command_runner.go:130] > 44cdc617bc65
	I0719 05:57:08.174904    5884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 05:57:08.213199    5884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 05:57:08.231381    5884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0719 05:57:08.231885    5884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0719 05:57:08.231885    5884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0719 05:57:08.231936    5884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:57:08.232234    5884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:57:08.232329    5884 kubeadm.go:157] found existing configuration files:
	
	I0719 05:57:08.244669    5884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 05:57:08.263550    5884 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:57:08.263645    5884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:57:08.274740    5884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 05:57:08.304203    5884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 05:57:08.320371    5884 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:57:08.320421    5884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:57:08.331195    5884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 05:57:08.360031    5884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 05:57:08.376206    5884 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:57:08.376266    5884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:57:08.388028    5884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 05:57:08.415568    5884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 05:57:08.431490    5884 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:57:08.432368    5884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:57:08.443052    5884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
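	The cleanup pass above (kubeadm.go:163) applies the same pattern to each kubeconfig: grep the file for the control-plane endpoint, and if grep exits non-zero (endpoint absent, or file missing entirely, as here), `rm -f` the file so kubeadm regenerates it. A hedged sketch of that loop; the function name and parameterized paths are mine:

	```python
	import subprocess

	def remove_stale_kubeconfigs(paths,
	                             endpoint="https://control-plane.minikube.internal:8443"):
	    """Remove each kubeconfig that does not mention the expected endpoint.

	    grep exits non-zero both when the pattern is absent and when the
	    file does not exist; either way the file is safe to delete and
	    rebuild, matching the grep-then-rm sequence in the log.
	    """
	    for path in paths:
	        probe = subprocess.run(["grep", endpoint, path], capture_output=True)
	        if probe.returncode != 0:
	            subprocess.run(["rm", "-f", path])
	```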
	I0719 05:57:08.471580    5884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 05:57:08.500902    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0719 05:57:08.815354    5884 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0719 05:57:08.815354    5884 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0719 05:57:08.815354    5884 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 05:57:08.815466    5884 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 05:57:08.815466    5884 command_runner.go:130] > [certs] Using the existing "sa" key
	I0719 05:57:08.815528    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 05:57:10.014579    5884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1990362s)
	I0719 05:57:10.014579    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:10.330906    5884 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 05:57:10.330906    5884 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 05:57:10.330906    5884 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0719 05:57:10.330906    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:10.419159    5884 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 05:57:10.419955    5884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 05:57:10.419955    5884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 05:57:10.419955    5884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 05:57:10.420069    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:10.551363    5884 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 05:57:10.552757    5884 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:57:10.565940    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:11.069679    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:11.574690    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:12.070556    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:12.580934    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:12.609000    5884 command_runner.go:130] > 1971
	I0719 05:57:12.609599    5884 api_server.go:72] duration metric: took 2.056866s to wait for apiserver process to appear ...
	I0719 05:57:12.609690    5884 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:57:12.609690    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:15.492826    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 05:57:15.493473    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 05:57:15.493473    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:15.529623    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 05:57:15.529623    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 05:57:15.619842    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:15.629806    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:57:15.629806    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:57:16.124853    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:16.133697    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:57:16.133697    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:57:16.610826    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:16.641800    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:57:16.641800    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:57:17.120724    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:17.128274    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 200:
	ok
	I0719 05:57:17.128274    5884 round_trippers.go:463] GET https://172.28.162.149:8443/version
	I0719 05:57:17.128274    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:17.128274    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:17.128274    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:17.140478    5884 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 05:57:17.140530    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:17.140530    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:17.140530    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Content-Length: 263
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:17 GMT
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Audit-Id: 9de0256c-9477-49c5-af84-d61c7c2056bd
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:17.140643    5884 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 05:57:17.140780    5884 api_server.go:141] control plane version: v1.30.3
	I0719 05:57:17.140855    5884 api_server.go:131] duration metric: took 4.5310343s to wait for apiserver health ...
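	The healthz wait that just completed (api_server.go:253) is a plain poll loop: hit /healthz repeatedly until it returns 200, tolerating the 403s seen while RBAC bootstraps and the 500s while poststarthooks finish. A simplified sketch of the pattern, with the HTTP GET abstracted behind a callable so it is not tied to a live apiserver (names are mine, not minikube's):

	```python
	import time

	def wait_for_healthz(check, timeout=60.0, interval=0.5):
	    """Poll `check()` (a callable returning an HTTP status code) until 200.

	    Non-200 codes are retried: in the log the same endpoint answers
	    403, then 500 several times, and finally 200. Raises on timeout.
	    """
	    deadline = time.monotonic() + timeout
	    while time.monotonic() < deadline:
	        if check() == 200:
	            return True
	        time.sleep(interval)
	    raise TimeoutError("apiserver /healthz never became healthy")
	```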
	I0719 05:57:17.140855    5884 cni.go:84] Creating CNI manager for ""
	I0719 05:57:17.140855    5884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 05:57:17.145288    5884 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 05:57:17.160458    5884 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 05:57:17.172590    5884 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0719 05:57:17.172696    5884 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0719 05:57:17.172696    5884 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0719 05:57:17.172696    5884 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 05:57:17.172696    5884 command_runner.go:130] > Access: 2024-07-19 05:55:40.547944400 +0000
	I0719 05:57:17.172696    5884 command_runner.go:130] > Modify: 2024-07-18 23:04:21.000000000 +0000
	I0719 05:57:17.172847    5884 command_runner.go:130] > Change: 2024-07-19 05:55:31.647000000 +0000
	I0719 05:57:17.172847    5884 command_runner.go:130] >  Birth: -
	I0719 05:57:17.173733    5884 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 05:57:17.173733    5884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 05:57:17.236238    5884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 05:57:18.790984    5884 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0719 05:57:18.791711    5884 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0719 05:57:18.791711    5884 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0719 05:57:18.791711    5884 command_runner.go:130] > daemonset.apps/kindnet configured
	I0719 05:57:18.791850    5884 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5555935s)
	I0719 05:57:18.792029    5884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:57:18.792460    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:18.792557    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:18.792557    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:18.792557    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:18.799163    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:18.799163    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:18.799163    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:18.799163    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:18.799163    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:18.799163    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:18 GMT
	I0719 05:57:18.799163    5884 round_trippers.go:580]     Audit-Id: 2d62b38e-4352-41fe-b558-1a503cc6dc45
	I0719 05:57:18.799163    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:18.801165    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1882"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 88240 chars]
	I0719 05:57:18.807142    5884 system_pods.go:59] 12 kube-system pods found
	I0719 05:57:18.807142    5884 system_pods.go:61] "coredns-7db6d8ff4d-hw9kh" [d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 05:57:18.807142    5884 system_pods.go:61] "etcd-multinode-761300" [296a455d-9236-4939-b002-5fa6dd843880] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kindnet-22ts9" [0d3c5a3b-fa22-4542-b9a5-478056ccc9cc] Running
	I0719 05:57:18.807142    5884 system_pods.go:61] "kindnet-6wxhn" [c0859b76-8ace-4de2-a940-4344594c5d27] Running
	I0719 05:57:18.807142    5884 system_pods.go:61] "kindnet-dj497" [124722d1-6c9c-4de4-b242-2f58e89b223b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-apiserver-multinode-761300" [89d493c7-c827-467c-ae64-9cdb2b5061df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-controller-manager-multinode-761300" [2124834c-1961-49fb-8699-fba2fc5dd0ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-proxy-c48b9" [67e2ee42-a2c4-4ed1-a2bf-840702a255b4] Running
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-proxy-c4z7f" [17ff8aac-2d57-44fb-a3ec-f0d6ea181881] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-proxy-mjv8l" [4d0f7d34-4031-46d3-a580-a2d080d9d335] Running
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-scheduler-multinode-761300" [49a739d1-1ae3-4a41-aebc-0eb7b2b4f242] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 05:57:18.807142    5884 system_pods.go:61] "storage-provisioner" [87c864ea-0853-481c-ab24-2ab209760f69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 05:57:18.807142    5884 system_pods.go:74] duration metric: took 15.113ms to wait for pod list to return data ...
	I0719 05:57:18.807142    5884 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:57:18.807142    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes
	I0719 05:57:18.807142    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:18.807142    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:18.807142    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:18.811495    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:18.811495    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:18.811495    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:18.811495    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:18.811495    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:18.811495    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:18.811495    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:18 GMT
	I0719 05:57:18.811495    5884 round_trippers.go:580]     Audit-Id: 02671730-14c8-4372-a07e-fed9482525db
	I0719 05:57:18.812151    5884 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1882"},"items":[{"metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16290 chars]
	I0719 05:57:18.813109    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:18.813109    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:18.813109    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:18.813109    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:18.813109    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:18.813109    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:18.813109    5884 node_conditions.go:105] duration metric: took 5.9668ms to run NodePressure ...
	I0719 05:57:18.813109    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:19.055300    5884 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0719 05:57:19.152930    5884 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0719 05:57:19.154738    5884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 05:57:19.155750    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0719 05:57:19.155750    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.155750    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.155750    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.160793    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:19.160793    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.160793    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.160793    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.160793    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.160793    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.160793    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.160793    5884 round_trippers.go:580]     Audit-Id: 4083154a-87b4-44c1-9990-96caec4db871
	I0719 05:57:19.161738    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1884"},"items":[{"metadata":{"name":"etcd-multinode-761300","namespace":"kube-system","uid":"296a455d-9236-4939-b002-5fa6dd843880","resourceVersion":"1813","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.162.149:2379","kubernetes.io/config.hash":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.mirror":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.seen":"2024-07-19T05:57:10.588894693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0719 05:57:19.163727    5884 kubeadm.go:739] kubelet initialised
	I0719 05:57:19.163727    5884 kubeadm.go:740] duration metric: took 7.9763ms waiting for restarted kubelet to initialise ...
	I0719 05:57:19.163727    5884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:57:19.163727    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:19.163727    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.163727    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.163727    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.177908    5884 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0719 05:57:19.178000    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.178000    5884 round_trippers.go:580]     Audit-Id: 17151b11-a60a-469b-a29f-72e4627cf28c
	I0719 05:57:19.178000    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.178090    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.178090    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.178090    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.178090    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.179809    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1884"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 88240 chars]
	I0719 05:57:19.184675    5884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.184675    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:19.184675    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.184675    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.184675    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.187126    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.187126    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.187126    5884 round_trippers.go:580]     Audit-Id: dab98f20-8223-49e0-9e1f-34642024fe26
	I0719 05:57:19.187126    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.187126    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.187126    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.187126    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.187126    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.187126    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:19.188130    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.188130    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.188130    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.188130    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.195125    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:19.195125    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.195125    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.195125    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.195125    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.195125    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.195125    5884 round_trippers.go:580]     Audit-Id: 66c5aa30-e5d3-4f2f-aab0-1af898c8a4f6
	I0719 05:57:19.195125    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.196122    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:19.196122    5884 pod_ready.go:97] node "multinode-761300" hosting pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.196122    5884 pod_ready.go:81] duration metric: took 11.4468ms for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.196122    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.196122    5884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.196122    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-761300
	I0719 05:57:19.196122    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.196122    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.196122    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.200136    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:19.200164    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.200164    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.200164    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.200164    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.200164    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.200164    5884 round_trippers.go:580]     Audit-Id: 4810ddb7-492a-431d-a365-465b8975d528
	I0719 05:57:19.200164    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.200164    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-761300","namespace":"kube-system","uid":"296a455d-9236-4939-b002-5fa6dd843880","resourceVersion":"1813","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.162.149:2379","kubernetes.io/config.hash":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.mirror":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.seen":"2024-07-19T05:57:10.588894693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0719 05:57:19.201092    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.201092    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.201092    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.201092    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.203789    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.203789    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.203789    5884 round_trippers.go:580]     Audit-Id: 63504cec-7f02-44bb-9d08-40515bc2db7b
	I0719 05:57:19.203789    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.203789    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.203789    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.203789    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.203789    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.203789    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:19.204754    5884 pod_ready.go:97] node "multinode-761300" hosting pod "etcd-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.204754    5884 pod_ready.go:81] duration metric: took 8.6311ms for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.204754    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "etcd-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.204754    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.204754    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-761300
	I0719 05:57:19.204754    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.204754    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.204754    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.207759    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:19.207759    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.207759    5884 round_trippers.go:580]     Audit-Id: 88ee6419-9e67-4312-b883-a6cfc037cc52
	I0719 05:57:19.207759    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.207759    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.207759    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.207759    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.207759    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.207759    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-761300","namespace":"kube-system","uid":"89d493c7-c827-467c-ae64-9cdb2b5061df","resourceVersion":"1814","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.162.149:8443","kubernetes.io/config.hash":"b21ce007ca118b4c86324a165dd45eec","kubernetes.io/config.mirror":"b21ce007ca118b4c86324a165dd45eec","kubernetes.io/config.seen":"2024-07-19T05:57:10.501200307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0719 05:57:19.208525    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.208525    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.208525    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.208525    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.211131    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.211131    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.211131    5884 round_trippers.go:580]     Audit-Id: 1066d24f-ba52-4d3a-9a2d-7d5a5d84044b
	I0719 05:57:19.211131    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.211131    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.211131    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.211131    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.211131    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.212235    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:19.212696    5884 pod_ready.go:97] node "multinode-761300" hosting pod "kube-apiserver-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.212750    5884 pod_ready.go:81] duration metric: took 7.942ms for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.212750    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "kube-apiserver-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.212750    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.212808    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-761300
	I0719 05:57:19.212808    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.212886    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.212909    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.216532    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:19.216532    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.216532    5884 round_trippers.go:580]     Audit-Id: 06242767-62d5-410c-a42d-97672ebc95c5
	I0719 05:57:19.216532    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.216532    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.216532    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.216532    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.216532    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.217137    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-761300","namespace":"kube-system","uid":"2124834c-1961-49fb-8699-fba2fc5dd0ac","resourceVersion":"1811","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.mirror":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.seen":"2024-07-19T05:33:02.001207967Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0719 05:57:19.217446    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.217446    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.217446    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.217446    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.220031    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.220659    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.220659    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.220659    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.220659    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.220659    5884 round_trippers.go:580]     Audit-Id: 41f043a0-30ae-4579-a973-858f3ab325dd
	I0719 05:57:19.220659    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.220659    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.221033    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:19.221456    5884 pod_ready.go:97] node "multinode-761300" hosting pod "kube-controller-manager-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.221456    5884 pod_ready.go:81] duration metric: took 8.7057ms for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.221456    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "kube-controller-manager-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.221456    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c48b9" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.403970    5884 request.go:629] Waited for 182.1382ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c48b9
	I0719 05:57:19.404169    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c48b9
	I0719 05:57:19.404169    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.404280    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.404280    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.415617    5884 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0719 05:57:19.415617    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.415617    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.415893    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.415893    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.415893    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.415893    5884 round_trippers.go:580]     Audit-Id: 6b0b82cf-8a91-4377-b093-da0d5c860823
	I0719 05:57:19.415893    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.416395    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c48b9","generateName":"kube-proxy-","namespace":"kube-system","uid":"67e2ee42-a2c4-4ed1-a2bf-840702a255b4","resourceVersion":"1764","creationTimestamp":"2024-07-19T05:41:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:41:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0719 05:57:19.592845    5884 request.go:629] Waited for 175.4962ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m03
	I0719 05:57:19.593050    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m03
	I0719 05:57:19.593050    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.593050    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.593162    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.596579    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:19.596579    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.597497    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.597497    5884 round_trippers.go:580]     Audit-Id: ff5c41f5-1397-4082-9ecd-6ebc1b392b28
	I0719 05:57:19.597497    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.597497    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.597540    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.597540    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.597669    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m03","uid":"b19fd562-f462-4172-835f-56c42463b282","resourceVersion":"1773","creationTimestamp":"2024-07-19T05:52:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_52_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:52:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0719 05:57:19.598457    5884 pod_ready.go:97] node "multinode-761300-m03" hosting pod "kube-proxy-c48b9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m03" has status "Ready":"Unknown"
	I0719 05:57:19.598457    5884 pod_ready.go:81] duration metric: took 376.9964ms for pod "kube-proxy-c48b9" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.598604    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300-m03" hosting pod "kube-proxy-c48b9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m03" has status "Ready":"Unknown"
	I0719 05:57:19.598604    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.795287    5884 request.go:629] Waited for 196.1408ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:57:19.795484    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:57:19.795484    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.795484    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.795575    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.798313    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.798313    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.798313    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.798313    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.798313    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.798313    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.798313    5884 round_trippers.go:580]     Audit-Id: ffdd68f1-b7c1-4795-bd74-8c1b90942533
	I0719 05:57:19.798313    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.799298    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4z7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"17ff8aac-2d57-44fb-a3ec-f0d6ea181881","resourceVersion":"1888","creationTimestamp":"2024-07-19T05:33:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0719 05:57:19.998798    5884 request.go:629] Waited for 198.3651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.998798    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.998798    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.998798    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.999011    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.002442    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:20.002748    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.002748    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.002806    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.002806    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.002806    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.002806    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.002806    5884 round_trippers.go:580]     Audit-Id: c5babcf1-fddc-453e-b36a-f9c8206749df
	I0719 05:57:20.003026    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:20.003789    5884 pod_ready.go:97] node "multinode-761300" hosting pod "kube-proxy-c4z7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:20.003789    5884 pod_ready.go:81] duration metric: took 405.1793ms for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:20.003789    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "kube-proxy-c4z7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:20.003789    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:20.202616    5884 request.go:629] Waited for 198.7356ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:57:20.203267    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:57:20.203267    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:20.203267    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:20.203267    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.206849    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:20.207289    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.207336    5884 round_trippers.go:580]     Audit-Id: 0df9e7b0-45d4-4c1d-95f5-7b45a6c27213
	I0719 05:57:20.207336    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.207336    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.207336    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.207336    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.207384    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.207457    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjv8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d0f7d34-4031-46d3-a580-a2d080d9d335","resourceVersion":"1787","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0719 05:57:20.405218    5884 request.go:629] Waited for 196.8375ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:57:20.405523    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:57:20.405523    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:20.405523    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:20.405523    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.409432    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:20.409617    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.409617    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.409617    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.409617    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.409617    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.409617    5884 round_trippers.go:580]     Audit-Id: 6d77f774-6239-4948-bb6f-553cb42185f0
	I0719 05:57:20.409617    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.410347    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"1789","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4486 chars]
	I0719 05:57:20.411843    5884 pod_ready.go:97] node "multinode-761300-m02" hosting pod "kube-proxy-mjv8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m02" has status "Ready":"Unknown"
	I0719 05:57:20.411894    5884 pod_ready.go:81] duration metric: took 408.1003ms for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:20.411894    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300-m02" hosting pod "kube-proxy-mjv8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m02" has status "Ready":"Unknown"
	I0719 05:57:20.411894    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:20.592484    5884 request.go:629] Waited for 180.4254ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:57:20.592777    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:57:20.592777    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:20.592980    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:20.592980    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.597110    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:20.597110    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.597110    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.597613    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.597613    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.597613    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.597613    5884 round_trippers.go:580]     Audit-Id: 6c13160b-6dff-4a59-9614-a2e00b682068
	I0719 05:57:20.597613    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.598366    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-761300","namespace":"kube-system","uid":"49a739d1-1ae3-4a41-aebc-0eb7b2b4f242","resourceVersion":"1812","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.mirror":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.seen":"2024-07-19T05:33:02.001209067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0719 05:57:20.798280    5884 request.go:629] Waited for 198.9027ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:20.798397    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:20.798397    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:20.798397    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:20.798702    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.802031    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:20.802905    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.802905    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.802905    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.802905    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.802905    5884 round_trippers.go:580]     Audit-Id: 07248fc1-ebaa-4f45-9f4c-7a02680793e2
	I0719 05:57:20.802905    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.802905    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.803658    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:20.804287    5884 pod_ready.go:97] node "multinode-761300" hosting pod "kube-scheduler-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:20.804351    5884 pod_ready.go:81] duration metric: took 392.3845ms for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:20.804456    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "kube-scheduler-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:20.804456    5884 pod_ready.go:38] duration metric: took 1.6407091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:57:20.804456    5884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 05:57:20.827115    5884 command_runner.go:130] > -16
	I0719 05:57:20.827299    5884 ops.go:34] apiserver oom_adj: -16
	I0719 05:57:20.827299    5884 kubeadm.go:597] duration metric: took 12.821865s to restartPrimaryControlPlane
	I0719 05:57:20.827299    5884 kubeadm.go:394] duration metric: took 12.8858005s to StartCluster
	I0719 05:57:20.827299    5884 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:20.827299    5884 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:57:20.830399    5884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:20.832063    5884 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.162.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 05:57:20.832063    5884 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 05:57:20.832710    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:57:20.836580    5884 out.go:177] * Enabled addons: 
	I0719 05:57:20.844211    5884 out.go:177] * Verifying Kubernetes components...
	I0719 05:57:20.848339    5884 addons.go:510] duration metric: took 16.2757ms for enable addons: enabled=[]
	I0719 05:57:20.857811    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:21.147637    5884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:57:21.174019    5884 node_ready.go:35] waiting up to 6m0s for node "multinode-761300" to be "Ready" ...
	I0719 05:57:21.174086    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:21.174086    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:21.174086    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:21.174086    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:21.174795    5884 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 05:57:21.174795    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:21.178062    5884 round_trippers.go:580]     Audit-Id: a61d494e-ae1c-4976-bca1-94bbbfec8722
	I0719 05:57:21.178062    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:21.178062    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:21.178093    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:21.178093    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:21.178111    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:21 GMT
	I0719 05:57:21.178616    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:21.684584    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:21.684584    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:21.684584    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:21.684584    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:21.689482    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:21.689746    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:21.689746    5884 round_trippers.go:580]     Audit-Id: bf768ffd-ce34-488e-a8e2-890c21ce5cc9
	I0719 05:57:21.689746    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:21.689845    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:21.689845    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:21.689873    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:21.689873    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:21 GMT
	I0719 05:57:21.690116    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:22.184802    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:22.184802    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:22.184802    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:22.184802    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:22.189464    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:22.189464    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:22.189464    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:22 GMT
	I0719 05:57:22.189464    5884 round_trippers.go:580]     Audit-Id: 1904384c-5956-4842-9573-48c9351f8afd
	I0719 05:57:22.189464    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:22.189464    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:22.189464    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:22.189464    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:22.189464    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:22.683677    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:22.683677    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:22.683677    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:22.683677    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:22.686252    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:22.687186    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:22.687186    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:22 GMT
	I0719 05:57:22.687243    5884 round_trippers.go:580]     Audit-Id: 6a2c8282-9c58-464f-b07d-3330bb1baaa2
	I0719 05:57:22.687243    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:22.687243    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:22.687243    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:22.687243    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:22.687615    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:23.181443    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:23.181768    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:23.181768    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:23.181861    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:23.186508    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:23.187167    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:23.187167    5884 round_trippers.go:580]     Audit-Id: 34c042c7-0948-4810-a940-daa26fc77eb7
	I0719 05:57:23.187167    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:23.187167    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:23.187167    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:23.187167    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:23.187167    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:23 GMT
	I0719 05:57:23.187167    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:23.188031    5884 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:57:23.679175    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:23.679225    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:23.679225    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:23.679225    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:23.683812    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:23.684520    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:23.684520    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:23.684520    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:23 GMT
	I0719 05:57:23.684520    5884 round_trippers.go:580]     Audit-Id: 49fa6658-d7b7-4404-838f-4e049df09a0b
	I0719 05:57:23.684520    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:23.684520    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:23.684520    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:23.684520    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:24.177176    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:24.177176    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:24.177176    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:24.177176    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:24.188554    5884 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0719 05:57:24.188554    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:24.189087    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:24.189087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:24.189087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:24.189087    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:24 GMT
	I0719 05:57:24.189087    5884 round_trippers.go:580]     Audit-Id: 2f5f902c-14d8-4cf4-984c-f8234346aebc
	I0719 05:57:24.189087    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:24.189340    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:24.679252    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:24.679648    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:24.679648    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:24.679648    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:24.683310    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:24.683310    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:24.683310    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:24 GMT
	I0719 05:57:24.683310    5884 round_trippers.go:580]     Audit-Id: ea5e3670-a29d-47aa-b415-00a207ac7e58
	I0719 05:57:24.683505    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:24.683505    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:24.683505    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:24.683505    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:24.684163    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:25.177501    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:25.177501    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:25.177501    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:25.177501    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:25.181146    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:25.181503    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:25.181503    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:25.181503    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:25.181503    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:25 GMT
	I0719 05:57:25.181608    5884 round_trippers.go:580]     Audit-Id: fae9ad74-a2da-4327-9a65-94bd94bb0271
	I0719 05:57:25.181608    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:25.181608    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:25.182037    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:25.675652    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:25.675652    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:25.675652    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:25.675652    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:25.679252    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:25.679252    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:25.679252    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:25.679252    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:25.679252    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:25.679252    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:25.679252    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:25 GMT
	I0719 05:57:25.679252    5884 round_trippers.go:580]     Audit-Id: 8aa3bb72-2614-4667-a409-bed6ac78cd2d
	I0719 05:57:25.679577    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:25.680347    5884 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:57:26.174934    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:26.174934    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:26.174934    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:26.175044    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:26.179360    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:26.179489    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:26.179489    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:26 GMT
	I0719 05:57:26.179489    5884 round_trippers.go:580]     Audit-Id: 2f4fc3c6-106b-476d-8b72-0b1e466ccb70
	I0719 05:57:26.179489    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:26.179489    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:26.179489    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:26.179489    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:26.179766    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:26.675711    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:26.675711    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:26.675711    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:26.675711    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:26.679673    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:26.679857    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:26.679857    5884 round_trippers.go:580]     Audit-Id: a6055a1a-000b-4d75-b044-130baf0a7423
	I0719 05:57:26.679857    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:26.679857    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:26.679857    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:26.679857    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:26.679857    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:26 GMT
	I0719 05:57:26.680194    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:27.189058    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:27.189310    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:27.189310    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:27.189310    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:27.200018    5884 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 05:57:27.200996    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:27.200996    5884 round_trippers.go:580]     Audit-Id: 1253aa49-8dc0-4f27-bc11-d65e285499fd
	I0719 05:57:27.200996    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:27.201042    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:27.201042    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:27.201042    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:27.201042    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:27 GMT
	I0719 05:57:27.202346    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:27.675550    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:27.675684    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:27.675684    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:27.675684    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:27.679135    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:27.679932    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:27.679932    5884 round_trippers.go:580]     Audit-Id: 812a0eb9-4912-43cd-b534-b1704f28b62a
	I0719 05:57:27.679932    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:27.679932    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:27.679932    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:27.679932    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:27.679932    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:27 GMT
	I0719 05:57:27.680347    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:27.680834    5884 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:57:28.174776    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:28.174776    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:28.174867    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:28.174867    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:28.178673    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:28.178673    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:28.179044    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:28.179044    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:28.179044    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:28.179044    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:28 GMT
	I0719 05:57:28.179044    5884 round_trippers.go:580]     Audit-Id: 483a3fd1-678e-46e8-b070-5109240038d9
	I0719 05:57:28.179044    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:28.179264    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:28.682355    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:28.682355    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:28.682355    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:28.682355    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:28.685964    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:28.685964    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:28.685964    5884 round_trippers.go:580]     Audit-Id: d31bceb4-bc9e-4b03-a367-4fb3307f3ea2
	I0719 05:57:28.686266    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:28.686266    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:28.686266    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:28.686266    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:28.686266    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:28 GMT
	I0719 05:57:28.686407    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:29.175033    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:29.175033    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:29.175033    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:29.175033    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:29.179087    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:29.179087    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:29.179087    5884 round_trippers.go:580]     Audit-Id: 3e0ba758-175e-44b6-8bad-22d9814fda9f
	I0719 05:57:29.179087    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:29.179087    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:29.179087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:29.179087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:29.179210    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:29 GMT
	I0719 05:57:29.179436    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1918","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0719 05:57:29.684585    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:29.684585    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:29.684585    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:29.684664    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:29.687912    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:29.688712    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:29.688712    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:29 GMT
	I0719 05:57:29.688712    5884 round_trippers.go:580]     Audit-Id: 6d0c4b6d-01d4-4187-b540-a38b21aff691
	I0719 05:57:29.688712    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:29.688712    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:29.688712    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:29.688817    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:29.689157    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1918","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0719 05:57:29.689705    5884 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:57:30.182215    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:30.182215    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:30.182215    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:30.182215    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:30.185939    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:30.186542    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:30.186542    5884 round_trippers.go:580]     Audit-Id: 8231ac2e-ac5b-403b-9b45-aa3fff4c6bc2
	I0719 05:57:30.186542    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:30.186542    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:30.186542    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:30.186542    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:30.186542    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:30 GMT
	I0719 05:57:30.186542    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1918","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0719 05:57:30.682945    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:30.683005    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:30.683005    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:30.683005    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:30.685474    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:30.685474    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:30.685474    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:30 GMT
	I0719 05:57:30.685474    5884 round_trippers.go:580]     Audit-Id: fc5ee575-ce03-487a-8022-c853c779625e
	I0719 05:57:30.686340    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:30.686340    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:30.686340    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:30.686340    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:30.686492    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1918","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0719 05:57:31.187307    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:31.187307    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.187406    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.187406    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.191781    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:31.191946    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.191946    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.191946    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.191946    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.191946    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.191946    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.191946    5884 round_trippers.go:580]     Audit-Id: 7f123028-232c-44fb-8e6e-b160e9feac5d
	I0719 05:57:31.192168    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:31.193066    5884 node_ready.go:49] node "multinode-761300" has status "Ready":"True"
	I0719 05:57:31.193066    5884 node_ready.go:38] duration metric: took 10.0189245s for node "multinode-761300" to be "Ready" ...
	I0719 05:57:31.193190    5884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:57:31.193308    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:31.193308    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.193394    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.193394    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.203640    5884 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 05:57:31.203640    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.203640    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.203640    5884 round_trippers.go:580]     Audit-Id: afdc21f1-569c-4da3-a2a2-eda37222cf04
	I0719 05:57:31.203640    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.203640    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.203640    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.203640    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.205924    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1929"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86673 chars]
	I0719 05:57:31.210414    5884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:31.210642    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:31.210642    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.210715    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.210715    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.221855    5884 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0719 05:57:31.221855    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.221855    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.221855    5884 round_trippers.go:580]     Audit-Id: c034cc9f-21e1-4df4-b41d-2818b691d5ff
	I0719 05:57:31.221855    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.221855    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.221855    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.221855    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.221855    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:31.222877    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:31.222877    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.222877    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.222877    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.226921    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:31.227396    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.227396    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.227396    5884 round_trippers.go:580]     Audit-Id: 8cadac84-e110-4df6-bbd7-5b10af75aebc
	I0719 05:57:31.227396    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.227396    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.227396    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.227396    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.227532    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:31.718875    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:31.718949    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.718949    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.718949    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.723538    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:31.724632    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.724632    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.724632    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.724632    5884 round_trippers.go:580]     Audit-Id: 290876c2-1840-41c2-ad1c-705fc628799d
	I0719 05:57:31.724632    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.724632    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.724632    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.724995    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:31.726040    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:31.726095    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.726095    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.726095    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.729630    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:31.729630    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.729630    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.729630    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.729630    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.730050    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.730050    5884 round_trippers.go:580]     Audit-Id: 23606792-e254-43e4-92cb-4f2215cfa416
	I0719 05:57:31.730050    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.730404    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:32.217451    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:32.217628    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:32.217628    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:32.217628    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:32.220514    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:32.221549    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:32.221549    5884 round_trippers.go:580]     Audit-Id: c2357e2d-c721-4d32-8a87-76efad274056
	I0719 05:57:32.221549    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:32.221549    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:32.221549    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:32.221549    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:32.221549    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:32 GMT
	I0719 05:57:32.221803    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:32.222674    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:32.222764    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:32.222764    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:32.222764    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:32.225044    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:32.225812    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:32.225812    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:32.225812    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:32.225812    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:32.225812    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:32.225812    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:32 GMT
	I0719 05:57:32.225812    5884 round_trippers.go:580]     Audit-Id: bd205e2f-5958-4edd-9d86-090fff220c47
	I0719 05:57:32.226226    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:32.718409    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:32.718409    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:32.718409    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:32.718409    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:32.723579    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:32.723579    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:32.723579    5884 round_trippers.go:580]     Audit-Id: 205d375f-e65a-4a91-ab54-3ea9e43de3f1
	I0719 05:57:32.723579    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:32.723579    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:32.723579    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:32.724221    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:32.724221    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:32 GMT
	I0719 05:57:32.724348    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:32.725099    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:32.725099    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:32.725099    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:32.725099    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:32.728982    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:32.728982    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:32.728982    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:32.728982    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:32.728982    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:32.728982    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:32 GMT
	I0719 05:57:32.728982    5884 round_trippers.go:580]     Audit-Id: b4bbf629-008a-444c-98a5-897a92ec0b2d
	I0719 05:57:32.728982    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:32.729869    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:33.220753    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:33.220753    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:33.220860    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:33.220860    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:33.224810    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:33.225820    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:33.225843    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:33 GMT
	I0719 05:57:33.225843    5884 round_trippers.go:580]     Audit-Id: fe947b58-154c-4fc1-83bc-fd9ada9e33f6
	I0719 05:57:33.225843    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:33.225843    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:33.225843    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:33.225843    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:33.226651    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:33.227961    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:33.227961    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:33.227961    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:33.227961    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:33.232790    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:33.232790    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:33.232886    5884 round_trippers.go:580]     Audit-Id: 7b559462-ae38-42f1-adcb-7e5962fc1b0e
	I0719 05:57:33.232886    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:33.232886    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:33.232886    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:33.232886    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:33.232886    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:33 GMT
	I0719 05:57:33.233045    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:33.233592    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:33.718183    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:33.718266    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:33.718266    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:33.718320    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:33.724505    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:33.724505    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:33.724505    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:33.724505    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:33.724505    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:33.724505    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:33.724505    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:33 GMT
	I0719 05:57:33.724590    5884 round_trippers.go:580]     Audit-Id: bc7c878d-41a3-4d8c-b58f-23d47b8d3dad
	I0719 05:57:33.724660    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:33.725810    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:33.725836    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:33.725836    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:33.725836    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:33.728833    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:33.728833    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:33.729494    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:33.729494    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:33.729494    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:33.729494    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:33 GMT
	I0719 05:57:33.729617    5884 round_trippers.go:580]     Audit-Id: dcd65a7c-36b1-475c-af24-a2239865f663
	I0719 05:57:33.729617    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:33.730104    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:34.218199    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:34.218199    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:34.218291    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:34.218291    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:34.222693    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:34.223406    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:34.223406    5884 round_trippers.go:580]     Audit-Id: 5e4cdc02-8203-4140-ac90-2e7ed612d426
	I0719 05:57:34.223406    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:34.223406    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:34.223495    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:34.223495    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:34.223495    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:34 GMT
	I0719 05:57:34.223823    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:34.224781    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:34.224835    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:34.224835    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:34.224835    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:34.228013    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:34.228013    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:34.228013    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:34.228091    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:34 GMT
	I0719 05:57:34.228091    5884 round_trippers.go:580]     Audit-Id: 7f5f8a80-87c9-428a-9d06-850af648a0d6
	I0719 05:57:34.228091    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:34.228091    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:34.228091    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:34.228145    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:34.721308    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:34.721372    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:34.721372    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:34.721372    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:34.725410    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:34.725613    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:34.725613    5884 round_trippers.go:580]     Audit-Id: ec7e1a3d-9e85-428f-ac20-20aa2889bd24
	I0719 05:57:34.725613    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:34.725613    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:34.725613    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:34.725613    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:34.725613    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:34 GMT
	I0719 05:57:34.725856    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:34.726661    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:34.726720    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:34.726720    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:34.726720    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:34.729658    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:34.730603    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:34.730703    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:34 GMT
	I0719 05:57:34.730703    5884 round_trippers.go:580]     Audit-Id: 25467012-cfda-4144-aaa1-bea5e09d2e54
	I0719 05:57:34.730703    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:34.730703    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:34.730703    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:34.730743    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:34.730857    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:35.218593    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:35.218593    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:35.218593    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:35.218593    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:35.224603    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:35.224603    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:35.224745    5884 round_trippers.go:580]     Audit-Id: e67ea614-bfb9-4aa3-a56a-f0ffe9dcae8e
	I0719 05:57:35.224745    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:35.224745    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:35.224745    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:35.224806    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:35.224806    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:35 GMT
	I0719 05:57:35.224806    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:35.225932    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:35.225932    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:35.225981    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:35.225981    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:35.228616    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:35.228616    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:35.228616    5884 round_trippers.go:580]     Audit-Id: 00102d7d-d4f0-4040-b120-6f6e2516f6d5
	I0719 05:57:35.228616    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:35.228616    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:35.228616    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:35.228616    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:35.228616    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:35 GMT
	I0719 05:57:35.229565    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:35.716893    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:35.716985    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:35.716985    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:35.717095    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:35.720883    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:35.720883    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:35.720883    5884 round_trippers.go:580]     Audit-Id: f430d950-84ad-497e-a548-7cf568ee616b
	I0719 05:57:35.720883    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:35.721780    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:35.721780    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:35.721780    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:35.721780    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:35 GMT
	I0719 05:57:35.721955    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:35.723044    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:35.723044    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:35.723044    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:35.723044    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:35.726441    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:35.726441    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:35.726441    5884 round_trippers.go:580]     Audit-Id: cb4d1dea-e606-4597-9172-d9d4cea8884a
	I0719 05:57:35.726441    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:35.726676    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:35.726676    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:35.726676    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:35.726676    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:35 GMT
	I0719 05:57:35.727050    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:35.727120    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:36.222276    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:36.222276    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:36.222276    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:36.222276    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:36.226349    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:36.226349    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:36.226349    5884 round_trippers.go:580]     Audit-Id: ec013313-b8ac-42fd-a8b4-efc2c183045f
	I0719 05:57:36.226455    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:36.226455    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:36.226455    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:36.226455    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:36.226455    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:36 GMT
	I0719 05:57:36.226585    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:36.227327    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:36.227412    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:36.227684    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:36.227684    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:36.230899    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:36.230899    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:36.230899    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:36.230899    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:36.230899    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:36 GMT
	I0719 05:57:36.230899    5884 round_trippers.go:580]     Audit-Id: 21afca9e-83c6-4d0c-b0cb-2a3bd4b3fa83
	I0719 05:57:36.230899    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:36.230899    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:36.231473    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:36.721222    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:36.721222    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:36.721222    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:36.721222    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:36.726845    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:36.726941    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:36.726941    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:36.726941    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:36.726941    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:36 GMT
	I0719 05:57:36.726941    5884 round_trippers.go:580]     Audit-Id: 90a8a302-e50f-4a0c-97e4-aae9a0ffe7b9
	I0719 05:57:36.727036    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:36.727036    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:36.727790    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:36.728531    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:36.728637    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:36.728637    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:36.728637    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:36.730592    5884 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 05:57:36.730592    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:36.731649    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:36.731649    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:36.731649    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:36.731649    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:36 GMT
	I0719 05:57:36.731649    5884 round_trippers.go:580]     Audit-Id: d938666a-a613-4ef5-a33a-838290815684
	I0719 05:57:36.731649    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:36.732628    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:37.224485    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:37.224485    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:37.224485    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:37.224485    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:37.228083    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:37.228663    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:37.228663    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:37.228663    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:37.228663    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:37.228663    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:37 GMT
	I0719 05:57:37.228663    5884 round_trippers.go:580]     Audit-Id: 75f9ce97-11a1-40ee-b5fe-19d2d727c920
	I0719 05:57:37.228663    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:37.229134    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:37.229993    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:37.229993    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:37.229993    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:37.229993    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:37.232584    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:37.232584    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:37.233085    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:37 GMT
	I0719 05:57:37.233085    5884 round_trippers.go:580]     Audit-Id: 30ee913b-1644-43b2-a901-b242b3bd7063
	I0719 05:57:37.233085    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:37.233085    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:37.233085    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:37.233085    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:37.234241    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:37.710998    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:37.711204    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:37.711257    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:37.711257    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:37.716653    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:37.716653    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:37.716653    5884 round_trippers.go:580]     Audit-Id: 7533e1ef-b110-4af8-916c-7b6e8aff2a58
	I0719 05:57:37.716653    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:37.716653    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:37.716653    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:37.717197    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:37.717197    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:37 GMT
	I0719 05:57:37.718166    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:37.719418    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:37.719418    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:37.719418    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:37.719418    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:37.723407    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:37.723407    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:37.723407    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:37 GMT
	I0719 05:57:37.724053    5884 round_trippers.go:580]     Audit-Id: e9abd70c-d03f-44d0-8797-871de63ff944
	I0719 05:57:37.724053    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:37.724053    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:37.724053    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:37.724104    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:37.725119    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:38.211329    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:38.211415    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:38.211415    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:38.211415    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:38.214866    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:38.215885    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:38.215993    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:38.215993    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:38.215993    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:38.215993    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:38.215993    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:38 GMT
	I0719 05:57:38.216069    5884 round_trippers.go:580]     Audit-Id: f462f7f8-0783-44a2-8510-2ddd6e34e754
	I0719 05:57:38.217134    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:38.218039    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:38.218039    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:38.218039    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:38.218039    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:38.220395    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:38.220395    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:38.220395    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:38.220395    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:38.221124    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:38.221124    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:38.221124    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:38 GMT
	I0719 05:57:38.221124    5884 round_trippers.go:580]     Audit-Id: 66cf962d-f577-430e-b0a5-336d805b6155
	I0719 05:57:38.221359    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:38.222023    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:38.712236    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:38.712460    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:38.712460    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:38.712460    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:38.716872    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:38.716872    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:38.716872    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:38.716872    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:38.716872    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:38.716872    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:38 GMT
	I0719 05:57:38.716872    5884 round_trippers.go:580]     Audit-Id: 8333fb72-d0c9-4ddd-9c4e-4846235c9cc8
	I0719 05:57:38.717421    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:38.717566    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:38.718969    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:38.718969    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:38.718969    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:38.718969    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:38.721764    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:38.722674    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:38.722674    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:38.722674    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:38.722674    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:38 GMT
	I0719 05:57:38.722674    5884 round_trippers.go:580]     Audit-Id: 92563dc0-ff26-4257-bd6c-395f6de67496
	I0719 05:57:38.722674    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:38.722674    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:38.722674    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:39.213271    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:39.213347    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:39.213347    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:39.213347    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:39.217802    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:39.217852    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:39.217852    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:39.217852    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:39.217852    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:39.217852    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:39.217852    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:39 GMT
	I0719 05:57:39.217852    5884 round_trippers.go:580]     Audit-Id: ac24a718-86c1-48d7-bc98-1e2a61c39cf9
	I0719 05:57:39.217985    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:39.219037    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:39.219228    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:39.219414    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:39.219450    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:39.222605    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:39.222605    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:39.222605    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:39.222605    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:39.222605    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:39 GMT
	I0719 05:57:39.222605    5884 round_trippers.go:580]     Audit-Id: 3277de12-ba3a-4fc0-a044-387de48d7b9c
	I0719 05:57:39.223430    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:39.223430    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:39.223788    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:39.712350    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:39.712445    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:39.712474    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:39.712474    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:39.718227    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:39.718290    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:39.718290    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:39.718290    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:39.718290    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:39 GMT
	I0719 05:57:39.718290    5884 round_trippers.go:580]     Audit-Id: 148ae5ce-ec19-4675-a258-8f01f2988bcc
	I0719 05:57:39.718290    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:39.718290    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:39.718290    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:39.719129    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:39.719129    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:39.719129    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:39.719129    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:39.722792    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:39.722792    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:39.722792    5884 round_trippers.go:580]     Audit-Id: e94ffbf6-71cd-43b4-94a8-37a720af667d
	I0719 05:57:39.722792    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:39.722792    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:39.722890    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:39.722890    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:39.722890    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:39 GMT
	I0719 05:57:39.723182    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:40.225141    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:40.225141    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:40.225282    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:40.225282    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:40.229686    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:40.229782    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:40.229782    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:40.229782    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:40.229782    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:40 GMT
	I0719 05:57:40.229782    5884 round_trippers.go:580]     Audit-Id: cc6906cf-8135-4ad8-af53-e66fb84f3d10
	I0719 05:57:40.229782    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:40.229782    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:40.229782    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:40.230962    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:40.230962    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:40.231021    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:40.231021    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:40.234334    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:40.234334    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:40.234334    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:40.234334    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:40 GMT
	I0719 05:57:40.234334    5884 round_trippers.go:580]     Audit-Id: c130b00f-dc17-4000-a096-0fc34bfffd9f
	I0719 05:57:40.234334    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:40.234334    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:40.234334    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:40.235122    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:40.235592    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:40.725829    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:40.725922    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:40.725922    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:40.725922    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:40.729368    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:40.730109    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:40.730109    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:40.730109    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:40 GMT
	I0719 05:57:40.730109    5884 round_trippers.go:580]     Audit-Id: a4823366-fc92-43b7-bf67-e56c7366c554
	I0719 05:57:40.730250    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:40.730250    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:40.730250    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:40.730436    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:40.731254    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:40.731254    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:40.731309    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:40.731309    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:40.733717    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:40.733717    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:40.733717    5884 round_trippers.go:580]     Audit-Id: 61341f82-a17f-46f7-925f-2cdb18d69e23
	I0719 05:57:40.733717    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:40.733717    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:40.733717    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:40.733717    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:40.733717    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:40 GMT
	I0719 05:57:40.734573    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:41.217223    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:41.217223    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:41.217223    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:41.217223    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:41.221805    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:41.222344    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:41.222344    5884 round_trippers.go:580]     Audit-Id: cd65b363-0234-4a37-a8e1-54a2dd137696
	I0719 05:57:41.222344    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:41.222344    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:41.222344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:41.222344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:41.222344    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:41 GMT
	I0719 05:57:41.222656    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:41.223247    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:41.223842    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:41.223842    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:41.223842    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:41.228132    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:41.228303    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:41.228303    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:41.228303    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:41 GMT
	I0719 05:57:41.228303    5884 round_trippers.go:580]     Audit-Id: 4ceee7b3-6328-4073-8bd1-cbc057fd3c55
	I0719 05:57:41.228303    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:41.228303    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:41.228303    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:41.229151    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:41.718297    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:41.718297    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:41.718297    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:41.718297    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:41.722911    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:41.723536    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:41.723536    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:41 GMT
	I0719 05:57:41.723536    5884 round_trippers.go:580]     Audit-Id: 0b317b1e-389f-485a-8fda-59079b240f72
	I0719 05:57:41.723536    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:41.723536    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:41.723536    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:41.723536    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:41.723737    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:41.724592    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:41.724592    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:41.724592    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:41.724592    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:41.727703    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:41.728410    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:41.728410    5884 round_trippers.go:580]     Audit-Id: b8570dee-cf96-4099-b8d7-d3fe80a2d921
	I0719 05:57:41.728410    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:41.728410    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:41.728410    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:41.728410    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:41.728460    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:41 GMT
	I0719 05:57:41.728730    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:42.218535    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:42.218601    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:42.218601    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:42.218658    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:42.222545    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:42.223179    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:42.223280    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:42.223327    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:42 GMT
	I0719 05:57:42.223327    5884 round_trippers.go:580]     Audit-Id: f741ce2a-e14e-48d3-828c-f3be28ee8a8c
	I0719 05:57:42.223327    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:42.223327    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:42.223327    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:42.223327    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:42.224096    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:42.224233    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:42.224233    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:42.224233    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:42.228475    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:42.228475    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:42.228475    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:42.228475    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:42.228475    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:42.228475    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:42 GMT
	I0719 05:57:42.228475    5884 round_trippers.go:580]     Audit-Id: 0b97a496-4726-4f0b-a70e-cc1e05908846
	I0719 05:57:42.228475    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:42.229394    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:42.717172    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:42.717172    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:42.717172    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:42.717172    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:42.722197    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:42.722197    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:42.722197    5884 round_trippers.go:580]     Audit-Id: 78eaed26-b2ff-4d4b-bb6d-b1c7f01b7694
	I0719 05:57:42.722197    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:42.722197    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:42.722197    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:42.722197    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:42.722197    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:42 GMT
	I0719 05:57:42.722590    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:42.723306    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:42.723306    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:42.723306    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:42.723306    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:42.725898    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:42.725898    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:42.725898    5884 round_trippers.go:580]     Audit-Id: b57b47c5-2fca-44bd-9fbe-2178973f84f0
	I0719 05:57:42.725898    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:42.726624    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:42.726624    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:42.726624    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:42.726624    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:42 GMT
	I0719 05:57:42.726880    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:42.727449    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:43.218392    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:43.218506    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:43.218506    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:43.218506    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:43.222663    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:43.222663    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:43.222663    5884 round_trippers.go:580]     Audit-Id: 6a39a8d7-25cb-4291-ad2c-c3dea9fb549f
	I0719 05:57:43.222663    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:43.223007    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:43.223007    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:43.223007    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:43.223007    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:43 GMT
	I0719 05:57:43.223327    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:43.224005    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:43.224005    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:43.224078    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:43.224078    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:43.226254    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:43.226799    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:43.226882    5884 round_trippers.go:580]     Audit-Id: c0fa95e4-2fa5-48bd-8e54-c8e77c98702f
	I0719 05:57:43.226882    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:43.226964    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:43.226964    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:43.226964    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:43.226964    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:43 GMT
	I0719 05:57:43.227094    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:43.716296    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:43.716409    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:43.716409    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:43.716409    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:43.720100    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:43.720100    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:43.720100    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:43.720100    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:43.720100    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:43 GMT
	I0719 05:57:43.720100    5884 round_trippers.go:580]     Audit-Id: c1f1a8e6-2e17-4f1d-9bab-38c3b49fa869
	I0719 05:57:43.720100    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:43.720100    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:43.721325    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:43.721581    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:43.722100    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:43.722100    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:43.722100    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:43.727545    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:43.727545    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:43.727545    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:43 GMT
	I0719 05:57:43.727605    5884 round_trippers.go:580]     Audit-Id: fb23ca2d-7471-45e0-8e93-c469a9c33d56
	I0719 05:57:43.727605    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:43.727628    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:43.727654    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:43.727688    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:43.727863    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:44.218623    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:44.218623    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:44.218623    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:44.218623    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:44.222250    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:44.222739    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:44.222739    5884 round_trippers.go:580]     Audit-Id: 74f90e38-5c6c-4d0d-b92d-7dc563759c20
	I0719 05:57:44.222739    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:44.222739    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:44.222739    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:44.222739    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:44.222739    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:44 GMT
	I0719 05:57:44.223022    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:44.223885    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:44.223885    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:44.223885    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:44.223885    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:44.226651    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:44.226651    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:44.227381    5884 round_trippers.go:580]     Audit-Id: 05021496-3c43-4e00-ab4e-28b188fe8fca
	I0719 05:57:44.227522    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:44.227522    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:44.227571    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:44.227571    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:44.227571    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:44 GMT
	I0719 05:57:44.227571    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:44.718308    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:44.718472    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:44.718472    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:44.718472    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:44.723359    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:44.723359    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:44.723458    5884 round_trippers.go:580]     Audit-Id: 42876ad6-1ff2-46e9-b88e-1d40358ecca3
	I0719 05:57:44.723458    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:44.723458    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:44.723544    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:44.723544    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:44.723544    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:44 GMT
	I0719 05:57:44.723682    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:44.724805    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:44.724805    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:44.724900    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:44.724900    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:44.728092    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:44.728092    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:44.728092    5884 round_trippers.go:580]     Audit-Id: c5840966-3d21-47ae-88b5-494224da81b5
	I0719 05:57:44.728092    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:44.728092    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:44.728092    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:44.728092    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:44.728092    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:44 GMT
	I0719 05:57:44.728856    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:44.729518    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:45.213970    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:45.213970    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:45.213970    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:45.213970    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:45.217568    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:45.217568    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:45.217568    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:45.218148    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:45.218148    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:45.218148    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:45 GMT
	I0719 05:57:45.218148    5884 round_trippers.go:580]     Audit-Id: e6a021f7-f3a1-4449-83ec-c627c89c7499
	I0719 05:57:45.218148    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:45.218410    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:45.219407    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:45.219407    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:45.219407    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:45.219407    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:45.223252    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:45.223252    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:45.223252    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:45 GMT
	I0719 05:57:45.223252    5884 round_trippers.go:580]     Audit-Id: 4925b829-0c12-4702-a0ea-f9cbb370e6dc
	I0719 05:57:45.223252    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:45.223252    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:45.223252    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:45.223252    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:45.223252    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:45.712711    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:45.712800    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:45.712800    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:45.712912    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:45.716655    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:45.717158    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:45.717158    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:45.717158    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:45.717233    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:45 GMT
	I0719 05:57:45.717233    5884 round_trippers.go:580]     Audit-Id: 64331b88-0ec2-4f96-8fbf-9535f79361a4
	I0719 05:57:45.717233    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:45.717289    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:45.717471    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:45.718188    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:45.718188    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:45.718351    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:45.718351    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:45.722329    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:45.722329    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:45.722329    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:45.722329    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:45.722329    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:45 GMT
	I0719 05:57:45.722329    5884 round_trippers.go:580]     Audit-Id: 6d6a4fe7-d624-4699-b65f-b71161a1c450
	I0719 05:57:45.722329    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:45.722329    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:45.722329    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:46.217402    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:46.217526    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:46.217526    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:46.217526    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:46.220391    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:46.220391    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:46.221389    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:46.221389    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:46.221389    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:46 GMT
	I0719 05:57:46.221466    5884 round_trippers.go:580]     Audit-Id: d44663a1-24b0-472e-b10b-6aa5aed482eb
	I0719 05:57:46.221466    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:46.221466    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:46.221840    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:46.223324    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:46.223368    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:46.223368    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:46.223368    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:46.225674    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:46.225674    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:46.225674    5884 round_trippers.go:580]     Audit-Id: 98465590-ac73-4b3d-bd20-980c894fe860
	I0719 05:57:46.225674    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:46.225674    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:46.225674    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:46.225674    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:46.225674    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:46 GMT
	I0719 05:57:46.226566    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:46.720068    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:46.720068    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:46.720162    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:46.720162    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:46.725635    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:46.725635    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:46.725635    5884 round_trippers.go:580]     Audit-Id: 5ba8fee6-ab1e-40a2-822a-ef0843396572
	I0719 05:57:46.725635    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:46.725635    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:46.726467    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:46.726467    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:46.726467    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:46 GMT
	I0719 05:57:46.727005    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:46.727959    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:46.727959    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:46.727959    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:46.727959    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:46.731582    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:46.731582    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:46.731722    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:46.731722    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:46.731722    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:46.731722    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:46.731722    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:46 GMT
	I0719 05:57:46.731722    5884 round_trippers.go:580]     Audit-Id: 7740db36-712e-4f61-a783-394603e1fe1c
	I0719 05:57:46.732481    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:46.732979    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:47.220527    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:47.220527    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:47.220527    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:47.220527    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:47.224196    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:47.224196    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:47.224286    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:47 GMT
	I0719 05:57:47.224286    5884 round_trippers.go:580]     Audit-Id: 15a252ac-5573-49ec-97ba-1117fb2cb512
	I0719 05:57:47.224286    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:47.224286    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:47.224286    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:47.224286    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:47.224476    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:47.225462    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:47.225528    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:47.225528    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:47.225528    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:47.227832    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:47.227832    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:47.228574    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:47.228574    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:47.228574    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:47.228574    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:47 GMT
	I0719 05:57:47.228574    5884 round_trippers.go:580]     Audit-Id: 6d2c0a51-e0c9-46cc-8ab8-420b153c9b8c
	I0719 05:57:47.228574    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:47.228903    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:47.720889    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:47.720889    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:47.721042    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:47.721042    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:47.730436    5884 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 05:57:47.730773    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:47.730773    5884 round_trippers.go:580]     Audit-Id: 965b04ea-429f-43a3-a875-6a3530ef66ad
	I0719 05:57:47.730773    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:47.730773    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:47.730773    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:47.730773    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:47.730878    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:47 GMT
	I0719 05:57:47.733445    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:47.734250    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:47.734250    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:47.734250    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:47.734250    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:47.740755    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:47.740755    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:47.740755    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:47.740755    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:47.740755    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:47 GMT
	I0719 05:57:47.740755    5884 round_trippers.go:580]     Audit-Id: 99d342a0-9f7f-4386-849d-ca4f3edda4f6
	I0719 05:57:47.740755    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:47.740755    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:47.740755    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:48.222768    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:48.222927    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:48.222927    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:48.222927    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:48.227863    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:48.227971    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:48.228146    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:48.228146    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:48.228146    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:48.228146    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:48.228146    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:48 GMT
	I0719 05:57:48.228146    5884 round_trippers.go:580]     Audit-Id: 17efbd44-f15d-4e8a-a229-7b279b0ca2ec
	I0719 05:57:48.228297    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:48.229030    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:48.229030    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:48.229030    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:48.229030    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:48.232344    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:48.232344    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:48.232344    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:48.232344    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:48.232344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:48.232344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:48.232344    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:48 GMT
	I0719 05:57:48.232344    5884 round_trippers.go:580]     Audit-Id: aca9bff1-9483-486d-bec4-59f257607d55
	I0719 05:57:48.233300    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:48.711506    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:48.711506    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:48.711506    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:48.711506    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:48.717076    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:48.717076    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:48.717076    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:48 GMT
	I0719 05:57:48.717076    5884 round_trippers.go:580]     Audit-Id: 726dce85-9290-4920-9f3d-8c677438fe8b
	I0719 05:57:48.717076    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:48.717076    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:48.717076    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:48.717076    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:48.718086    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:48.718086    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:48.718086    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:48.718086    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:48.718086    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:48.721095    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:48.721095    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:48.721095    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:48.721095    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:48.721095    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:48 GMT
	I0719 05:57:48.721095    5884 round_trippers.go:580]     Audit-Id: 80344adc-49b7-4373-8dc3-58be220e328a
	I0719 05:57:48.721095    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:48.721095    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:48.722092    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:49.217258    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:49.217523    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:49.217523    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:49.217523    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:49.221117    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:49.222066    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:49.222099    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:49.222099    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:49.222099    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:49 GMT
	I0719 05:57:49.222099    5884 round_trippers.go:580]     Audit-Id: 5d1f6e2d-9c90-41d1-ba1e-1f226745715c
	I0719 05:57:49.222099    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:49.222099    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:49.222377    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:49.223149    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:49.223234    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:49.223234    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:49.223234    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:49.227153    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:49.227153    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:49.227153    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:49.227560    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:49.227560    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:49 GMT
	I0719 05:57:49.227560    5884 round_trippers.go:580]     Audit-Id: 2cc4af4d-c324-40e7-9720-33d56fb658e5
	I0719 05:57:49.227560    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:49.227560    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:49.227954    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:49.228219    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:49.712735    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:49.712735    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:49.712735    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:49.712735    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:49.715748    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:49.716482    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:49.716482    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:49.716482    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:49.716601    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:49 GMT
	I0719 05:57:49.716601    5884 round_trippers.go:580]     Audit-Id: 6eacfce9-e49d-479a-a42e-e472738aafe6
	I0719 05:57:49.716601    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:49.716601    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:49.716846    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:49.717695    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:49.717764    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:49.717764    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:49.717764    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:49.719764    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:49.720344    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:49.720344    5884 round_trippers.go:580]     Audit-Id: 6e0ca09d-55ca-4571-b079-3fac93127ac9
	I0719 05:57:49.720344    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:49.720344    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:49.720344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:49.720344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:49.720344    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:49 GMT
	I0719 05:57:49.720617    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:50.220970    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:50.221042    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:50.221042    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:50.221042    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:50.225780    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:50.225780    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:50.225780    5884 round_trippers.go:580]     Audit-Id: 0fc439a9-db2c-4c60-a543-de18fca024ed
	I0719 05:57:50.225780    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:50.225780    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:50.225780    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:50.225780    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:50.225780    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:50 GMT
	I0719 05:57:50.225780    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1955","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0719 05:57:50.227091    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:50.227091    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:50.227091    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:50.227091    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:50.230680    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:50.230680    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:50.230680    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:50.230680    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:50.231028    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:50.231028    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:50.231028    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:50 GMT
	I0719 05:57:50.231028    5884 round_trippers.go:580]     Audit-Id: d8153fc0-3dfc-428f-a38a-52e750c50586
	I0719 05:57:50.231304    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:50.718489    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:50.718489    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:50.718602    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:50.718602    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:50.726910    5884 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 05:57:50.727026    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:50.727026    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:50.727026    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:50 GMT
	I0719 05:57:50.727148    5884 round_trippers.go:580]     Audit-Id: 1d830e8e-4a2b-42bb-9c73-a68f59180970
	I0719 05:57:50.727163    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:50.727163    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:50.727163    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:50.727262    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1955","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0719 05:57:50.728134    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:50.728134    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:50.728134    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:50.728134    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:50.732120    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:50.732120    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:50.732120    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:50.732120    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:50.732120    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:50.732120    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:50.732120    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:50 GMT
	I0719 05:57:50.732120    5884 round_trippers.go:580]     Audit-Id: 1a7f5c7e-7488-4d52-ad44-e05fddd8b827
	I0719 05:57:50.732873    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.221135    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:51.221135    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.221135    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.221135    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.226011    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:51.226011    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.226011    5884 round_trippers.go:580]     Audit-Id: 3b86e50a-2482-4847-9fb3-5a796c9c585e
	I0719 05:57:51.226223    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.226223    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.226223    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.226223    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.226223    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.227209    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1958","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0719 05:57:51.228281    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.228281    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.228354    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.228354    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.230816    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.230816    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.230816    5884 round_trippers.go:580]     Audit-Id: 89f457d6-c211-4a02-aaf9-a26a3533e73a
	I0719 05:57:51.230816    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.230816    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.230816    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.230816    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.231567    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.231851    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.231983    5884 pod_ready.go:92] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.231983    5884 pod_ready.go:81] duration metric: took 20.0212156s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.231983    5884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.231983    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-761300
	I0719 05:57:51.231983    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.231983    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.231983    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.235153    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.235153    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.235153    5884 round_trippers.go:580]     Audit-Id: 34b272cd-7e34-499e-a365-7a2c0a63a4cb
	I0719 05:57:51.235153    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.235153    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.235153    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.235153    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.235153    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.235153    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-761300","namespace":"kube-system","uid":"296a455d-9236-4939-b002-5fa6dd843880","resourceVersion":"1908","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.162.149:2379","kubernetes.io/config.hash":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.mirror":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.seen":"2024-07-19T05:57:10.588894693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0719 05:57:51.236099    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.236099    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.236099    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.236099    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.239072    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.239072    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.239072    5884 round_trippers.go:580]     Audit-Id: 74b85c23-133e-4de2-ab47-73b5ca6090ec
	I0719 05:57:51.239072    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.239072    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.239072    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.239072    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.239072    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.239537    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.239954    5884 pod_ready.go:92] pod "etcd-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.240041    5884 pod_ready.go:81] duration metric: took 8.0578ms for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.240092    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.240218    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-761300
	I0719 05:57:51.240218    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.240253    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.240253    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.243763    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.243763    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.244217    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.244217    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.244217    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.244280    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.244280    5884 round_trippers.go:580]     Audit-Id: 4a6ecf9c-fd19-4baa-808d-0098b9591c4b
	I0719 05:57:51.244329    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.244554    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-761300","namespace":"kube-system","uid":"89d493c7-c827-467c-ae64-9cdb2b5061df","resourceVersion":"1907","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.162.149:8443","kubernetes.io/config.hash":"b21ce007ca118b4c86324a165dd45eec","kubernetes.io/config.mirror":"b21ce007ca118b4c86324a165dd45eec","kubernetes.io/config.seen":"2024-07-19T05:57:10.501200307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0719 05:57:51.244902    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.244902    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.244902    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.244902    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.250533    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:51.250533    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.250533    5884 round_trippers.go:580]     Audit-Id: aad45296-9001-4f10-a912-fd9dff53633a
	I0719 05:57:51.251238    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.251238    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.251238    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.251238    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.251238    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.251440    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.251489    5884 pod_ready.go:92] pod "kube-apiserver-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.251489    5884 pod_ready.go:81] duration metric: took 11.3965ms for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.251489    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.251489    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-761300
	I0719 05:57:51.251489    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.251489    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.251489    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.254301    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.254301    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.254301    5884 round_trippers.go:580]     Audit-Id: 8380bf7f-bf20-4aa2-8016-d9db909fbe69
	I0719 05:57:51.254301    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.254301    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.255067    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.255152    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.255152    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.255152    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-761300","namespace":"kube-system","uid":"2124834c-1961-49fb-8699-fba2fc5dd0ac","resourceVersion":"1898","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.mirror":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.seen":"2024-07-19T05:33:02.001207967Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0719 05:57:51.255973    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.256002    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.256002    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.256002    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.259399    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.259399    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.259399    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.259399    5884 round_trippers.go:580]     Audit-Id: 1d67fa31-0e86-4ee5-993e-36f6d2ca3af4
	I0719 05:57:51.259399    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.259399    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.259399    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.259399    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.259399    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.260451    5884 pod_ready.go:92] pod "kube-controller-manager-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.260482    5884 pod_ready.go:81] duration metric: took 8.993ms for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.260549    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c48b9" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.260626    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c48b9
	I0719 05:57:51.260626    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.260680    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.260680    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.262876    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.263747    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.263747    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.263747    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.263747    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.263747    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.263747    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.263747    5884 round_trippers.go:580]     Audit-Id: 79b33d47-672f-4b8e-b236-fd41057288d6
	I0719 05:57:51.264036    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c48b9","generateName":"kube-proxy-","namespace":"kube-system","uid":"67e2ee42-a2c4-4ed1-a2bf-840702a255b4","resourceVersion":"1764","creationTimestamp":"2024-07-19T05:41:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:41:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0719 05:57:51.264776    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m03
	I0719 05:57:51.264776    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.264776    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.264776    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.267236    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.268302    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.268374    5884 round_trippers.go:580]     Audit-Id: afc98435-0a92-4a19-9bf6-3d52aede6336
	I0719 05:57:51.268374    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.268374    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.268374    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.268374    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.268374    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.268374    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m03","uid":"b19fd562-f462-4172-835f-56c42463b282","resourceVersion":"1919","creationTimestamp":"2024-07-19T05:52:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_52_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:52:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4303 chars]
	I0719 05:57:51.268905    5884 pod_ready.go:97] node "multinode-761300-m03" hosting pod "kube-proxy-c48b9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m03" has status "Ready":"Unknown"
	I0719 05:57:51.268905    5884 pod_ready.go:81] duration metric: took 8.3565ms for pod "kube-proxy-c48b9" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:51.268905    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300-m03" hosting pod "kube-proxy-c48b9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m03" has status "Ready":"Unknown"
	I0719 05:57:51.268905    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.422778    5884 request.go:629] Waited for 153.6388ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:57:51.423070    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:57:51.423070    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.423070    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.423070    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.426778    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.426778    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.426778    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.426778    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.426778    5884 round_trippers.go:580]     Audit-Id: 5e3c79db-d84d-42f3-b4c7-204e5ac5dd41
	I0719 05:57:51.427253    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.427253    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.427253    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.427535    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4z7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"17ff8aac-2d57-44fb-a3ec-f0d6ea181881","resourceVersion":"1888","creationTimestamp":"2024-07-19T05:33:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0719 05:57:51.625209    5884 request.go:629] Waited for 197.5161ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.625416    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.625416    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.625416    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.625416    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.629787    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:51.629787    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.630210    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.630210    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.630210    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.630210    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.630210    5884 round_trippers.go:580]     Audit-Id: 6ea6f8b5-8e83-4064-87f2-9aef9763e761
	I0719 05:57:51.630210    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.630771    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.631847    5884 pod_ready.go:92] pod "kube-proxy-c4z7f" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.631930    5884 pod_ready.go:81] duration metric: took 362.9373ms for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.631930    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.828293    5884 request.go:629] Waited for 196.2434ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:57:51.828293    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:57:51.828293    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.828293    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.828293    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.832029    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.832900    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.832900    5884 round_trippers.go:580]     Audit-Id: a302d55d-8100-4c79-a14c-3996a03e2026
	I0719 05:57:51.832900    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.832900    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.832900    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.832900    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.832900    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.833281    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjv8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d0f7d34-4031-46d3-a580-a2d080d9d335","resourceVersion":"1787","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0719 05:57:52.031086    5884 request.go:629] Waited for 196.9262ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:57:52.031086    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:57:52.031086    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.031086    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.031086    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.034804    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:52.035434    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.035434    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.035434    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.035434    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.035434    5884 round_trippers.go:580]     Audit-Id: 3bb17b35-539c-4992-b8d6-5dfcc1b3cac7
	I0719 05:57:52.035434    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.035434    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.035861    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"1937","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4583 chars]
	I0719 05:57:52.036411    5884 pod_ready.go:97] node "multinode-761300-m02" hosting pod "kube-proxy-mjv8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m02" has status "Ready":"Unknown"
	I0719 05:57:52.036411    5884 pod_ready.go:81] duration metric: took 404.4755ms for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:52.036493    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300-m02" hosting pod "kube-proxy-mjv8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m02" has status "Ready":"Unknown"
	I0719 05:57:52.036493    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:52.234618    5884 request.go:629] Waited for 197.8535ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:57:52.234618    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:57:52.234618    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.234618    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.234618    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.239206    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:52.239479    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.239479    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.239479    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.239549    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.239549    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.239549    5884 round_trippers.go:580]     Audit-Id: 5bfb3126-1f55-432e-a414-3f9e6d2444ff
	I0719 05:57:52.239549    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.239777    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-761300","namespace":"kube-system","uid":"49a739d1-1ae3-4a41-aebc-0eb7b2b4f242","resourceVersion":"1924","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.mirror":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.seen":"2024-07-19T05:33:02.001209067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0719 05:57:52.422255    5884 request.go:629] Waited for 181.6283ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:52.422255    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:52.422502    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.422502    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.422580    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.428937    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:52.429006    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.429141    5884 round_trippers.go:580]     Audit-Id: bc7ad7e0-2aa5-43a6-8f77-69bcc1088a8a
	I0719 05:57:52.429166    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.429166    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.429166    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.429166    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.429166    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.430069    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:52.430721    5884 pod_ready.go:92] pod "kube-scheduler-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:52.430721    5884 pod_ready.go:81] duration metric: took 394.2235ms for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:52.430721    5884 pod_ready.go:38] duration metric: took 21.2372723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:57:52.430721    5884 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:57:52.444169    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:52.472914    5884 command_runner.go:130] > 1971
	I0719 05:57:52.473512    5884 api_server.go:72] duration metric: took 31.641063s to wait for apiserver process to appear ...
	I0719 05:57:52.473512    5884 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:57:52.473512    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:52.482066    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 200:
	ok
	I0719 05:57:52.482808    5884 round_trippers.go:463] GET https://172.28.162.149:8443/version
	I0719 05:57:52.482808    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.482808    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.482927    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.485605    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:52.485814    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.485902    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.485902    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.485902    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.485947    5884 round_trippers.go:580]     Content-Length: 263
	I0719 05:57:52.485947    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.485947    5884 round_trippers.go:580]     Audit-Id: b820f40b-3f7b-46e0-9d59-77e09d96c67b
	I0719 05:57:52.485947    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.485947    5884 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 05:57:52.486023    5884 api_server.go:141] control plane version: v1.30.3
	I0719 05:57:52.486098    5884 api_server.go:131] duration metric: took 12.5857ms to wait for apiserver health ...
	I0719 05:57:52.486098    5884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:57:52.629086    5884 request.go:629] Waited for 142.6691ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:52.629143    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:52.629143    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.629272    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.629272    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.641698    5884 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 05:57:52.642540    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.642540    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.642540    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.642540    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.642540    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.642540    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.642540    5884 round_trippers.go:580]     Audit-Id: 6f7ac419-e62a-4de2-b3f2-b98f4439ac78
	I0719 05:57:52.645686    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1962"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1958","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87033 chars]
	I0719 05:57:52.649688    5884 system_pods.go:59] 12 kube-system pods found
	I0719 05:57:52.649688    5884 system_pods.go:61] "coredns-7db6d8ff4d-hw9kh" [d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "etcd-multinode-761300" [296a455d-9236-4939-b002-5fa6dd843880] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kindnet-22ts9" [0d3c5a3b-fa22-4542-b9a5-478056ccc9cc] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kindnet-6wxhn" [c0859b76-8ace-4de2-a940-4344594c5d27] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kindnet-dj497" [124722d1-6c9c-4de4-b242-2f58e89b223b] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kube-apiserver-multinode-761300" [89d493c7-c827-467c-ae64-9cdb2b5061df] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kube-controller-manager-multinode-761300" [2124834c-1961-49fb-8699-fba2fc5dd0ac] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "kube-proxy-c48b9" [67e2ee42-a2c4-4ed1-a2bf-840702a255b4] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "kube-proxy-c4z7f" [17ff8aac-2d57-44fb-a3ec-f0d6ea181881] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "kube-proxy-mjv8l" [4d0f7d34-4031-46d3-a580-a2d080d9d335] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "kube-scheduler-multinode-761300" [49a739d1-1ae3-4a41-aebc-0eb7b2b4f242] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "storage-provisioner" [87c864ea-0853-481c-ab24-2ab209760f69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 05:57:52.650996    5884 system_pods.go:74] duration metric: took 164.8963ms to wait for pod list to return data ...
	I0719 05:57:52.650996    5884 default_sa.go:34] waiting for default service account to be created ...
	I0719 05:57:52.831860    5884 request.go:629] Waited for 180.6946ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/default/serviceaccounts
	I0719 05:57:52.831860    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/default/serviceaccounts
	I0719 05:57:52.831860    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.831860    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.831860    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.836455    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:52.836561    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Audit-Id: c5c1b363-db93-4439-8355-30334e4075bc
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.836561    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.836561    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Content-Length: 262
	I0719 05:57:52.836707    5884 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1962"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"401ce23d-5c82-4e9b-b140-9f6a95fa53e6","resourceVersion":"308","creationTimestamp":"2024-07-19T05:33:15Z"}}]}
	I0719 05:57:52.837038    5884 default_sa.go:45] found service account: "default"
	I0719 05:57:52.837119    5884 default_sa.go:55] duration metric: took 186.1206ms for default service account to be created ...
	I0719 05:57:52.837119    5884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 05:57:53.035066    5884 request.go:629] Waited for 197.8603ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:53.035494    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:53.035494    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:53.035556    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:53.035556    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:53.042087    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:53.042087    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:53.042087    5884 round_trippers.go:580]     Audit-Id: 1b30c49a-6623-4e45-9966-8fd1b5cb47c9
	I0719 05:57:53.042087    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:53.042087    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:53.042087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:53.042087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:53.042087    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:53 GMT
	I0719 05:57:53.043399    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1962"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1958","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87033 chars]
	I0719 05:57:53.046985    5884 system_pods.go:86] 12 kube-system pods found
	I0719 05:57:53.046985    5884 system_pods.go:89] "coredns-7db6d8ff4d-hw9kh" [d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "etcd-multinode-761300" [296a455d-9236-4939-b002-5fa6dd843880] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kindnet-22ts9" [0d3c5a3b-fa22-4542-b9a5-478056ccc9cc] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kindnet-6wxhn" [c0859b76-8ace-4de2-a940-4344594c5d27] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kindnet-dj497" [124722d1-6c9c-4de4-b242-2f58e89b223b] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-apiserver-multinode-761300" [89d493c7-c827-467c-ae64-9cdb2b5061df] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-controller-manager-multinode-761300" [2124834c-1961-49fb-8699-fba2fc5dd0ac] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-proxy-c48b9" [67e2ee42-a2c4-4ed1-a2bf-840702a255b4] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-proxy-c4z7f" [17ff8aac-2d57-44fb-a3ec-f0d6ea181881] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-proxy-mjv8l" [4d0f7d34-4031-46d3-a580-a2d080d9d335] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-scheduler-multinode-761300" [49a739d1-1ae3-4a41-aebc-0eb7b2b4f242] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "storage-provisioner" [87c864ea-0853-481c-ab24-2ab209760f69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 05:57:53.046985    5884 system_pods.go:126] duration metric: took 209.8641ms to wait for k8s-apps to be running ...
	I0719 05:57:53.046985    5884 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 05:57:53.064248    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:57:53.087450    5884 system_svc.go:56] duration metric: took 40.4641ms WaitForService to wait for kubelet
	I0719 05:57:53.087450    5884 kubeadm.go:582] duration metric: took 32.2549938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:57:53.087450    5884 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:57:53.226268    5884 request.go:629] Waited for 138.8164ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes
	I0719 05:57:53.226268    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes
	I0719 05:57:53.226268    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:53.226268    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:53.226268    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:53.231834    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:53.231834    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:53.231834    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:53.231834    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:53.231834    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:53.231834    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:53.232040    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:53 GMT
	I0719 05:57:53.232040    5884 round_trippers.go:580]     Audit-Id: 2fa0269a-7d66-4710-a577-440d4fc894e5
	I0719 05:57:53.232418    5884 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1962"},"items":[{"metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16163 chars]
	I0719 05:57:53.233510    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:53.233619    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:53.233619    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:53.233619    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:53.233619    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:53.233619    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:53.233619    5884 node_conditions.go:105] duration metric: took 146.1669ms to run NodePressure ...
	I0719 05:57:53.233619    5884 start.go:241] waiting for startup goroutines ...
	I0719 05:57:53.233619    5884 start.go:246] waiting for cluster config update ...
	I0719 05:57:53.233619    5884 start.go:255] writing updated cluster config ...
	I0719 05:57:53.239217    5884 out.go:177] 
	I0719 05:57:53.242551    5884 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:57:53.254186    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:57:53.254489    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:57:53.263237    5884 out.go:177] * Starting "multinode-761300-m02" worker node in "multinode-761300" cluster
	I0719 05:57:53.268139    5884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:57:53.268139    5884 cache.go:56] Caching tarball of preloaded images
	I0719 05:57:53.268473    5884 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:57:53.268473    5884 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 05:57:53.268810    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:57:53.270587    5884 start.go:360] acquireMachinesLock for multinode-761300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:57:53.270587    5884 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-761300-m02"
	I0719 05:57:53.271523    5884 start.go:96] Skipping create...Using existing machine configuration
	I0719 05:57:53.271523    5884 fix.go:54] fixHost starting: m02
	I0719 05:57:53.271745    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:57:55.507297    5884 main.go:141] libmachine: [stdout =====>] : Off
	
	I0719 05:57:55.507854    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:57:55.507854    5884 fix.go:112] recreateIfNeeded on multinode-761300-m02: state=Stopped err=<nil>
	W0719 05:57:55.507934    5884 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 05:57:55.514661    5884 out.go:177] * Restarting existing hyperv VM for "multinode-761300-m02" ...
	I0719 05:57:55.519398    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-761300-m02
	I0719 05:57:58.705517    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:57:58.705517    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:57:58.705517    5884 main.go:141] libmachine: Waiting for host to start...
	I0719 05:57:58.705517    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:01.060350    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:01.060446    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:01.060446    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:03.674968    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:58:03.674968    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:04.681739    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:06.976738    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:06.976935    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:06.976935    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:09.600887    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:58:09.600887    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:10.602595    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:12.929180    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:12.929180    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:12.929951    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:15.693600    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:58:15.694601    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:16.701064    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:19.060925    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:19.060925    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:19.060925    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:21.673532    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:58:21.674525    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:22.678437    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:24.979345    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:24.980435    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:24.980495    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:27.627767    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:27.627767    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:27.631325    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:29.915707    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:29.915707    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:29.915880    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:32.523416    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:32.524470    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:32.524785    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:58:32.527551    5884 machine.go:94] provisionDockerMachine start ...
	I0719 05:58:32.527672    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:34.782958    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:34.782958    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:34.783063    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:37.453792    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:37.453792    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:37.462364    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:58:37.462566    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:58:37.462566    5884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:58:37.583786    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 05:58:37.583786    5884 buildroot.go:166] provisioning hostname "multinode-761300-m02"
	I0719 05:58:37.583786    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:39.805060    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:39.805155    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:39.805155    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:42.472068    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:42.472068    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:42.477094    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:58:42.477890    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:58:42.477890    5884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-761300-m02 && echo "multinode-761300-m02" | sudo tee /etc/hostname
	I0719 05:58:42.643124    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-761300-m02
	
	I0719 05:58:42.643228    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:44.857252    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:44.857252    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:44.857645    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:47.457519    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:47.457519    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:47.466088    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:58:47.466777    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:58:47.466777    5884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-761300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-761300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-761300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:58:47.607112    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:58:47.607112    5884 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 05:58:47.607112    5884 buildroot.go:174] setting up certificates
	I0719 05:58:47.607112    5884 provision.go:84] configureAuth start
	I0719 05:58:47.607112    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:49.777125    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:49.777125    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:49.777125    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:52.378885    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:52.378885    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:52.378947    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:54.559435    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:54.559435    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:54.559515    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:57.194514    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:57.195517    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:57.195563    5884 provision.go:143] copyHostCerts
	I0719 05:58:57.195779    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 05:58:57.195954    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:58:57.195954    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 05:58:57.196610    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 05:58:57.197935    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 05:58:57.197967    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:58:57.197967    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 05:58:57.198623    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:58:57.199553    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 05:58:57.199553    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:58:57.199553    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 05:58:57.200124    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:58:57.200736    5884 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-761300-m02 san=[127.0.0.1 172.28.162.127 localhost minikube multinode-761300-m02]
	I0719 05:58:57.616219    5884 provision.go:177] copyRemoteCerts
	I0719 05:58:57.629370    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:58:57.629370    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:59.847533    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:59.848327    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:59.848327    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:02.469729    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:02.469729    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:02.470871    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:59:02.569724    5884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9402542s)
	I0719 05:59:02.569792    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 05:59:02.570440    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:59:02.616254    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 05:59:02.616644    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:59:02.662800    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 05:59:02.663310    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0719 05:59:02.707759    5884 provision.go:87] duration metric: took 15.1004621s to configureAuth
	I0719 05:59:02.707939    5884 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:59:02.708388    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:59:02.708388    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:04.892415    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:04.893127    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:04.893303    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:07.554525    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:07.554525    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:07.560490    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:07.561298    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:07.561298    5884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:59:07.687263    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 05:59:07.687321    5884 buildroot.go:70] root file system type: tmpfs
	I0719 05:59:07.687598    5884 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 05:59:07.687656    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:09.900231    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:09.900231    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:09.901234    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:12.516794    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:12.516794    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:12.523844    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:12.524517    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:12.524517    5884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.162.149"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:59:12.683600    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.162.149
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:59:12.683756    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:14.899640    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:14.899804    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:14.899804    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:17.563280    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:17.563898    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:17.569640    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:17.569807    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:17.569807    5884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:59:20.074028    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 05:59:20.074100    5884 machine.go:97] duration metric: took 47.545969s to provisionDockerMachine
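The `diff -u … || { mv …; systemctl … }` command above is minikube's idempotent unit install: the new file is swapped in and the service restarted only when the rendered unit differs from what is on disk (or, as in this log, does not exist yet). A minimal sketch of the same pattern on throwaway `/tmp` files, with the `systemctl` calls omitted (illustrative, not minikube's code):

```shell
# Demo of the diff-or-install pattern from the log, on throwaway files.
new=/tmp/docker.service.demo.new
cur=/tmp/docker.service.demo
rm -f "$cur"
printf '[Unit]\nDescription=demo\n' > "$new"
# diff exits non-zero when the target differs or does not exist yet
# (the "can't stat" case seen in the log), so the new unit is installed.
diff -u "$cur" "$new" >/dev/null 2>&1 || mv "$new" "$cur"
cat "$cur"
```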
	I0719 05:59:20.074156    5884 start.go:293] postStartSetup for "multinode-761300-m02" (driver="hyperv")
	I0719 05:59:20.074156    5884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:59:20.086489    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:59:20.086489    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:22.308203    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:22.308203    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:22.309014    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:24.959341    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:24.959512    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:24.960039    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:59:25.072245    5884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9856949s)
	I0719 05:59:25.091228    5884 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:59:25.103592    5884 command_runner.go:130] > NAME=Buildroot
	I0719 05:59:25.103592    5884 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 05:59:25.103592    5884 command_runner.go:130] > ID=buildroot
	I0719 05:59:25.103592    5884 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 05:59:25.103592    5884 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 05:59:25.103945    5884 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:59:25.103945    5884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 05:59:25.104125    5884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 05:59:25.105124    5884 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 05:59:25.105124    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 05:59:25.117301    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:59:25.140505    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 05:59:25.190121    5884 start.go:296] duration metric: took 5.115902s for postStartSetup
	I0719 05:59:25.190121    5884 fix.go:56] duration metric: took 1m31.9174757s for fixHost
	I0719 05:59:25.190121    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:27.454518    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:27.454783    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:27.454992    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:30.182789    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:30.182789    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:30.187447    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:30.188404    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:30.188404    5884 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 05:59:30.309555    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721368770.321155784
	
	I0719 05:59:30.309555    5884 fix.go:216] guest clock: 1721368770.321155784
	I0719 05:59:30.309555    5884 fix.go:229] Guest: 2024-07-19 05:59:30.321155784 +0000 UTC Remote: 2024-07-19 05:59:25.190121 +0000 UTC m=+261.333707901 (delta=5.131034784s)
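The `fix.go` lines above compare the guest clock (from `date +%s.%N` over SSH) against the host-side reference and log the skew (about 5.13s here) before forcing the guest clock with `sudo date -s`. The arithmetic, reduced to whole seconds using the epoch values from this log:

```shell
# Guest vs. host clock skew, using the epoch seconds from the log above.
guest=1721368770           # guest `date +%s` (2024-07-19 05:59:30 UTC)
remote=1721368765          # host-side reference, truncated to seconds
delta=$((guest - remote))
echo "delta=${delta}s"     # the guest is then reset with `sudo date -s @${guest}`
```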
	I0719 05:59:30.309555    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:32.544384    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:32.544384    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:32.544384    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:35.231365    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:35.231365    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:35.237468    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:35.237618    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:35.238161    5884 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721368770
	I0719 05:59:35.369957    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 05:59:30 UTC 2024
	
	I0719 05:59:35.369957    5884 fix.go:236] clock set: Fri Jul 19 05:59:30 UTC 2024
	 (err=<nil>)
	I0719 05:59:35.370090    5884 start.go:83] releasing machines lock for "multinode-761300-m02", held for 1m42.0973651s
	I0719 05:59:35.370230    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:37.650768    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:37.650768    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:37.651612    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:40.291332    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:40.291332    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:40.294470    5884 out.go:177] * Found network options:
	I0719 05:59:40.302865    5884 out.go:177]   - NO_PROXY=172.28.162.149
	W0719 05:59:40.305863    5884 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 05:59:40.308223    5884 out.go:177]   - NO_PROXY=172.28.162.149
	W0719 05:59:40.310928    5884 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 05:59:40.312870    5884 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 05:59:40.316429    5884 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:59:40.316429    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:40.329388    5884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:59:40.329388    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:42.678671    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:42.678671    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:42.678780    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:42.678975    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:42.679041    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:42.679041    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:45.467002    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:45.467002    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:45.468098    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:59:45.493480    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:45.493480    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:45.494059    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:59:45.558549    5884 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 05:59:45.559003    5884 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2425101s)
	W0719 05:59:45.559116    5884 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 05:59:45.593539    5884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0719 05:59:45.594307    5884 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2648552s)
	W0719 05:59:45.594412    5884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:59:45.606859    5884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:59:45.637616    5884 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 05:59:45.637702    5884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
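The `find … -exec mv` above renames any bridge/podman CNI configs to `*.mk_disabled` so they do not conflict with the networking minikube configures. The same rename, reproduced on a scratch directory (paths are illustrative):

```shell
# Reproduce the CNI-disable rename on a scratch directory.
d=/tmp/cni-demo
rm -rf "$d" && mkdir -p "$d"
touch "$d/87-podman-bridge.conflist" "$d/10-loopback.conf"
# Rename bridge/podman configs; leave loopback and already-disabled files alone.
find "$d" -maxdepth 1 -type f \( -name '*bridge*' -or -name '*podman*' \) \
  -and -not -name '*.mk_disabled' \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$d"
```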
	I0719 05:59:45.637702    5884 start.go:495] detecting cgroup driver to use...
	I0719 05:59:45.638010    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0719 05:59:45.657791    5884 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 05:59:45.658237    5884 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 05:59:45.678348    5884 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 05:59:45.689712    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 05:59:45.724483    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:59:45.745891    5884 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:59:45.756738    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:59:45.792026    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:59:45.824702    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:59:45.855332    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:59:45.887087    5884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:59:45.916907    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:59:45.949875    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 05:59:45.982578    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
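The run of `sed` commands above edits `/etc/containerd/config.toml` in place to switch containerd to the `cgroupfs` driver; for example, the cgroup-driver line is flipped like this (demo file, not the real config):

```shell
# Flip SystemdCgroup the way the log's sed does, on a demo config file.
cfg=/tmp/containerd-config.demo.toml
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '    SystemdCgroup = true' > "$cfg"
# Same expression as in the log: keep indentation, rewrite only the value.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```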
	I0719 05:59:46.016856    5884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:59:46.036160    5884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 05:59:46.049255    5884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:59:46.080316    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:59:46.276689    5884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:59:46.312424    5884 start.go:495] detecting cgroup driver to use...
	I0719 05:59:46.325676    5884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:59:46.350125    5884 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 05:59:46.350279    5884 command_runner.go:130] > [Unit]
	I0719 05:59:46.350279    5884 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 05:59:46.350279    5884 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 05:59:46.350279    5884 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 05:59:46.350279    5884 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 05:59:46.350279    5884 command_runner.go:130] > StartLimitBurst=3
	I0719 05:59:46.350279    5884 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 05:59:46.350279    5884 command_runner.go:130] > [Service]
	I0719 05:59:46.350279    5884 command_runner.go:130] > Type=notify
	I0719 05:59:46.350279    5884 command_runner.go:130] > Restart=on-failure
	I0719 05:59:46.350279    5884 command_runner.go:130] > Environment=NO_PROXY=172.28.162.149
	I0719 05:59:46.350279    5884 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 05:59:46.350279    5884 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 05:59:46.350279    5884 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 05:59:46.350279    5884 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 05:59:46.350279    5884 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 05:59:46.350279    5884 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 05:59:46.350279    5884 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 05:59:46.350279    5884 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 05:59:46.350279    5884 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 05:59:46.350279    5884 command_runner.go:130] > ExecStart=
	I0719 05:59:46.350279    5884 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 05:59:46.350279    5884 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 05:59:46.350279    5884 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 05:59:46.350279    5884 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 05:59:46.350279    5884 command_runner.go:130] > LimitNOFILE=infinity
	I0719 05:59:46.350279    5884 command_runner.go:130] > LimitNPROC=infinity
	I0719 05:59:46.350279    5884 command_runner.go:130] > LimitCORE=infinity
	I0719 05:59:46.350279    5884 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 05:59:46.350279    5884 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 05:59:46.350279    5884 command_runner.go:130] > TasksMax=infinity
	I0719 05:59:46.350279    5884 command_runner.go:130] > TimeoutStartSec=0
	I0719 05:59:46.350279    5884 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 05:59:46.350279    5884 command_runner.go:130] > Delegate=yes
	I0719 05:59:46.350279    5884 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 05:59:46.350279    5884 command_runner.go:130] > KillMode=process
	I0719 05:59:46.350279    5884 command_runner.go:130] > [Install]
	I0719 05:59:46.350279    5884 command_runner.go:130] > WantedBy=multi-user.target
	I0719 05:59:46.368276    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:59:46.405356    5884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:59:46.445316    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:59:46.483740    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:59:46.526536    5884 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 05:59:46.604602    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:59:46.628481    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:59:46.661586    5884 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 05:59:46.673631    5884 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:59:46.682735    5884 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 05:59:46.695342    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:59:46.713282    5884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 05:59:46.756269    5884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:59:46.969335    5884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:59:47.160084    5884 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:59:47.160197    5884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:59:47.206611    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:59:47.401056    5884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:59:50.103981    5884 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7028923s)
	I0719 05:59:50.115909    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 05:59:50.151736    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:59:50.188581    5884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 05:59:50.389292    5884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 05:59:50.581727    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:59:50.776998    5884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 05:59:50.824550    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:59:50.861360    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:59:51.061188    5884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 05:59:51.172050    5884 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 05:59:51.184438    5884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 05:59:51.194527    5884 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 05:59:51.194964    5884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 05:59:51.194964    5884 command_runner.go:130] > Device: 0,22	Inode: 856         Links: 1
	I0719 05:59:51.194964    5884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 05:59:51.194964    5884 command_runner.go:130] > Access: 2024-07-19 05:59:51.098436496 +0000
	I0719 05:59:51.194964    5884 command_runner.go:130] > Modify: 2024-07-19 05:59:51.098436496 +0000
	I0719 05:59:51.194964    5884 command_runner.go:130] > Change: 2024-07-19 05:59:51.102436544 +0000
	I0719 05:59:51.194964    5884 command_runner.go:130] >  Birth: -
	I0719 05:59:51.195090    5884 start.go:563] Will wait 60s for crictl version
	I0719 05:59:51.207421    5884 ssh_runner.go:195] Run: which crictl
	I0719 05:59:51.213864    5884 command_runner.go:130] > /usr/bin/crictl
	I0719 05:59:51.226361    5884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:59:51.279187    5884 command_runner.go:130] > Version:  0.1.0
	I0719 05:59:51.279269    5884 command_runner.go:130] > RuntimeName:  docker
	I0719 05:59:51.279269    5884 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 05:59:51.279269    5884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 05:59:51.279269    5884 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 05:59:51.288593    5884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:59:51.321417    5884 command_runner.go:130] > 27.0.3
	I0719 05:59:51.331404    5884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:59:51.363963    5884 command_runner.go:130] > 27.0.3
	I0719 05:59:51.369890    5884 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 05:59:51.376939    5884 out.go:177]   - env NO_PROXY=172.28.162.149
	I0719 05:59:51.379017    5884 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 05:59:51.382628    5884 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 05:59:51.383674    5884 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 05:59:51.383735    5884 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 05:59:51.383735    5884 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 05:59:51.386885    5884 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 05:59:51.386948    5884 ip.go:210] interface addr: 172.28.160.1/20
	I0719 05:59:51.397418    5884 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 05:59:51.404366    5884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
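The `/bin/bash -c` one-liner above is a standard replace-or-append hosts update: strip any existing `host.minikube.internal` line, append the gateway IP discovered earlier (172.28.160.1), and copy the result back over `/etc/hosts`. The same logic against a scratch file:

```shell
# Replace-or-append a host.minikube.internal entry, on a scratch hosts file.
hosts=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n172.28.0.9\thost.minikube.internal\n' > "$hosts"
ip=172.28.160.1              # gateway IP found in the log above
tab=$(printf '\t')
# Drop any stale mapping, then append the fresh one atomically via a temp file.
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '%s\thost.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```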
	I0719 05:59:51.426112    5884 mustload.go:65] Loading cluster: multinode-761300
	I0719 05:59:51.427086    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:59:51.427652    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-761300" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-761300
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-761300: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-761300" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-761300	172.28.162.16
multinode-761300-m02	172.28.167.151
multinode-761300-m03	172.28.165.227

After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-761300 -n multinode-761300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-761300 -n multinode-761300: (12.5460498s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 logs -n 25
E0719 06:00:10.174192    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 logs -n 25: (9.2240727s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-761300 cp testdata\cp-test.txt                                                                                 | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:45 UTC | 19 Jul 24 05:45 UTC |
	|         | multinode-761300-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n                                                                                                  | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:45 UTC | 19 Jul 24 05:45 UTC |
	|         | multinode-761300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt                                                        | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:45 UTC | 19 Jul 24 05:45 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4110903034\001\cp-test_multinode-761300-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n                                                                                                  | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:45 UTC | 19 Jul 24 05:45 UTC |
	|         | multinode-761300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt                                                        | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:45 UTC | 19 Jul 24 05:45 UTC |
	|         | multinode-761300:/home/docker/cp-test_multinode-761300-m02_multinode-761300.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n                                                                                                  | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:45 UTC | 19 Jul 24 05:46 UTC |
	|         | multinode-761300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n multinode-761300 sudo cat                                                                        | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:46 UTC | 19 Jul 24 05:46 UTC |
	|         | /home/docker/cp-test_multinode-761300-m02_multinode-761300.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt                                                        | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:46 UTC | 19 Jul 24 05:46 UTC |
	|         | multinode-761300-m03:/home/docker/cp-test_multinode-761300-m02_multinode-761300-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n                                                                                                  | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:46 UTC | 19 Jul 24 05:46 UTC |
	|         | multinode-761300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n multinode-761300-m03 sudo cat                                                                    | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:46 UTC | 19 Jul 24 05:46 UTC |
	|         | /home/docker/cp-test_multinode-761300-m02_multinode-761300-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-761300 cp testdata\cp-test.txt                                                                                 | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:46 UTC | 19 Jul 24 05:47 UTC |
	|         | multinode-761300-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n                                                                                                  | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:47 UTC | 19 Jul 24 05:47 UTC |
	|         | multinode-761300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt                                                        | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:47 UTC | 19 Jul 24 05:47 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4110903034\001\cp-test_multinode-761300-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n                                                                                                  | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:47 UTC | 19 Jul 24 05:47 UTC |
	|         | multinode-761300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt                                                        | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:47 UTC | 19 Jul 24 05:47 UTC |
	|         | multinode-761300:/home/docker/cp-test_multinode-761300-m03_multinode-761300.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n                                                                                                  | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:47 UTC | 19 Jul 24 05:47 UTC |
	|         | multinode-761300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n multinode-761300 sudo cat                                                                        | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:47 UTC | 19 Jul 24 05:48 UTC |
	|         | /home/docker/cp-test_multinode-761300-m03_multinode-761300.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt                                                        | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:48 UTC | 19 Jul 24 05:48 UTC |
	|         | multinode-761300-m02:/home/docker/cp-test_multinode-761300-m03_multinode-761300-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n                                                                                                  | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:48 UTC | 19 Jul 24 05:48 UTC |
	|         | multinode-761300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-761300 ssh -n multinode-761300-m02 sudo cat                                                                    | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:48 UTC | 19 Jul 24 05:48 UTC |
	|         | /home/docker/cp-test_multinode-761300-m03_multinode-761300-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-761300 node stop m03                                                                                           | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:48 UTC | 19 Jul 24 05:49 UTC |
	| node    | multinode-761300 node start                                                                                              | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:50 UTC | 19 Jul 24 05:52 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-761300                                                                                                 | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:53 UTC |                     |
	| stop    | -p multinode-761300                                                                                                      | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:53 UTC | 19 Jul 24 05:55 UTC |
	| start   | -p multinode-761300                                                                                                      | multinode-761300 | minikube6\jenkins | v1.33.1 | 19 Jul 24 05:55 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 05:55:04
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 05:55:04.012004    5884 out.go:291] Setting OutFile to fd 628 ...
	I0719 05:55:04.013047    5884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:55:04.013047    5884 out.go:304] Setting ErrFile to fd 508...
	I0719 05:55:04.013047    5884 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:55:04.035369    5884 out.go:298] Setting JSON to false
	I0719 05:55:04.041912    5884 start.go:129] hostinfo: {"hostname":"minikube6","uptime":27530,"bootTime":1721340973,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 05:55:04.042155    5884 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 05:55:04.089782    5884 out.go:177] * [multinode-761300] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 05:55:04.139392    5884 notify.go:220] Checking for updates...
	I0719 05:55:04.148458    5884 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:55:04.155665    5884 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 05:55:04.200939    5884 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 05:55:04.213634    5884 out.go:177]   - MINIKUBE_LOCATION=19302
	I0719 05:55:04.228706    5884 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 05:55:04.243036    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:55:04.243518    5884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 05:55:09.846694    5884 out.go:177] * Using the hyperv driver based on existing profile
	I0719 05:55:09.859013    5884 start.go:297] selected driver: hyperv
	I0719 05:55:09.859550    5884 start.go:901] validating driver "hyperv" against &{Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.165.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:55:09.859802    5884 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 05:55:09.911405    5884 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:55:09.911405    5884 cni.go:84] Creating CNI manager for ""
	I0719 05:55:09.911405    5884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 05:55:09.911405    5884 start.go:340] cluster config:
	{Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.16 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.165.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provision
er:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:55:09.912291    5884 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 05:55:10.004135    5884 out.go:177] * Starting "multinode-761300" primary control-plane node in "multinode-761300" cluster
	I0719 05:55:10.037823    5884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:55:10.038532    5884 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 05:55:10.038532    5884 cache.go:56] Caching tarball of preloaded images
	I0719 05:55:10.039070    5884 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:55:10.039151    5884 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 05:55:10.039522    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:55:10.042784    5884 start.go:360] acquireMachinesLock for multinode-761300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:55:10.043025    5884 start.go:364] duration metric: took 241µs to acquireMachinesLock for "multinode-761300"
	I0719 05:55:10.043196    5884 start.go:96] Skipping create...Using existing machine configuration
	I0719 05:55:10.043274    5884 fix.go:54] fixHost starting: 
	I0719 05:55:10.044020    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:12.841116    5884 main.go:141] libmachine: [stdout =====>] : Off
	
	I0719 05:55:12.841291    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:12.841291    5884 fix.go:112] recreateIfNeeded on multinode-761300: state=Stopped err=<nil>
	W0719 05:55:12.841291    5884 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 05:55:12.878737    5884 out.go:177] * Restarting existing hyperv VM for "multinode-761300" ...
	I0719 05:55:12.902031    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-761300
	I0719 05:55:15.981672    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:15.981672    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:15.981672    5884 main.go:141] libmachine: Waiting for host to start...
	I0719 05:55:15.981672    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:18.279507    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:18.280440    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:18.280525    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:20.834982    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:20.834982    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:21.836048    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:24.091133    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:24.091207    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:24.091381    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:26.667159    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:26.667931    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:27.678397    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:29.920551    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:29.920640    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:29.920758    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:32.540674    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:32.540674    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:33.552900    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:35.776889    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:35.777737    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:35.777737    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:38.350455    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:55:38.350455    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:39.360647    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:41.645849    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:41.646075    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:41.646321    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:44.244611    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:55:44.245439    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:44.248355    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:46.401394    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:46.401394    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:46.402256    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:48.938183    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:55:48.938183    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:48.939442    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:55:48.941544    5884 machine.go:94] provisionDockerMachine start ...
	I0719 05:55:48.941544    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:51.097273    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:51.098091    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:51.098091    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:53.671346    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:55:53.671346    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:53.678441    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:55:53.678619    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:55:53.679242    5884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:55:53.812638    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 05:55:53.812741    5884 buildroot.go:166] provisioning hostname "multinode-761300"
	I0719 05:55:53.812741    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:55:55.993386    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:55:55.993386    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:55.994634    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:55:58.590472    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:55:58.590564    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:55:58.595734    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:55:58.596453    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:55:58.596453    5884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-761300 && echo "multinode-761300" | sudo tee /etc/hostname
	I0719 05:55:58.750725    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-761300
	
	I0719 05:55:58.750725    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:00.902162    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:00.902162    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:00.902454    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:03.476894    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:03.477691    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:03.482544    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:03.483220    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:03.483220    5884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-761300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-761300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-761300' | sudo tee -a /etc/hosts; 
				fi
			fi
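The `/etc/hosts` guard the provisioner runs above (replace an existing `127.0.1.1` entry, otherwise append one) can be exercised on a scratch file. This is a minimal sketch, assuming GNU grep/sed; the file path and the `old-name` entry are stand-ins, since the real command edits `/etc/hosts` in the guest.

```shell
# Sketch of the provisioner's /etc/hosts update, run against a scratch copy.
HOSTNAME_NEW="multinode-761300"
HOSTS_FILE="$(mktemp)"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS_FILE"

if ! grep -q "\s${HOSTNAME_NEW}$" "$HOSTS_FILE"; then
  if grep -q '^127.0.1.1\s' "$HOSTS_FILE"; then
    # an entry exists: rewrite it in place (GNU sed -i)
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME_NEW}/" "$HOSTS_FILE"
  else
    # no entry yet: append one
    echo "127.0.1.1 ${HOSTNAME_NEW}" >> "$HOSTS_FILE"
  fi
fi
grep '^127.0.1.1' "$HOSTS_FILE"
```

Running it twice is a no-op after the first pass, which is why the provisioner can safely re-run this on every `fixHost`.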
	I0719 05:56:03.628938    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:56:03.628938    5884 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 05:56:03.628938    5884 buildroot.go:174] setting up certificates
	I0719 05:56:03.628938    5884 provision.go:84] configureAuth start
	I0719 05:56:03.629546    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:05.777010    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:05.777509    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:05.777509    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:08.385330    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:08.385330    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:08.386425    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:10.551594    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:10.551594    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:10.552494    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:13.166121    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:13.166236    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:13.166236    5884 provision.go:143] copyHostCerts
	I0719 05:56:13.166236    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 05:56:13.166841    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:56:13.167160    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 05:56:13.167216    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 05:56:13.168689    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 05:56:13.168689    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:56:13.169258    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 05:56:13.169546    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:56:13.170461    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 05:56:13.170461    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:56:13.170461    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 05:56:13.171328    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:56:13.172164    5884 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-761300 san=[127.0.0.1 172.28.162.149 localhost minikube multinode-761300]
	I0719 05:56:13.327115    5884 provision.go:177] copyRemoteCerts
	I0719 05:56:13.337113    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:56:13.337113    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:15.518132    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:15.518228    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:15.518368    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:18.097256    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:18.097256    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:18.098580    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:56:18.219353    5884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8821808s)
	I0719 05:56:18.219353    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 05:56:18.221539    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:56:18.276256    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 05:56:18.276868    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:56:18.322300    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 05:56:18.322300    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 05:56:18.366948    5884 provision.go:87] duration metric: took 14.7377545s to configureAuth
	I0719 05:56:18.366948    5884 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:56:18.367180    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:56:18.367726    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:20.530074    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:20.530590    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:20.530642    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:23.128825    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:23.129130    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:23.134539    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:23.135413    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:23.135413    5884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:56:23.279177    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 05:56:23.279301    5884 buildroot.go:70] root file system type: tmpfs
	I0719 05:56:23.279508    5884 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
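The `tmpfs` detection above is a one-liner probe of the root filesystem type; the same command works on any Linux host with GNU coreutils `df` (the output will differ outside the buildroot guest, so no fixed result is assumed).

```shell
# Probe the root filesystem type, as the provisioner does over SSH above.
df --output=fstype / | tail -n 1
```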
	I0719 05:56:23.279645    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:25.452546    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:25.452956    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:25.453076    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:28.109487    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:28.110273    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:28.115826    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:28.116355    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:28.116498    5884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:56:28.275934    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:56:28.275934    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:30.441674    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:30.442142    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:30.442305    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:33.041217    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:33.041527    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:33.046939    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:33.047835    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:33.047835    5884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:56:35.687480    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 05:56:35.687480    5884 machine.go:97] duration metric: took 46.7453656s to provisionDockerMachine
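The `diff ... || { mv ...; restart; }` command above is an install-if-changed idiom: the `.new` unit only replaces the current one (followed by a daemon-reload and restart) when the two differ, or when the target is missing, as the `can't stat` output shows here. A minimal sketch on scratch files, with hypothetical flag values standing in for the real dockerd command line:

```shell
# Install-if-changed: only swap the file in when content differs.
CUR="$(mktemp)"; NEW="$(mktemp)"
echo "ExecStart=/usr/bin/dockerd --old-flags" > "$CUR"
echo "ExecStart=/usr/bin/dockerd --new-flags" > "$NEW"

if ! diff -u "$CUR" "$NEW" > /dev/null; then
  # the real command also runs: systemctl daemon-reload && systemctl restart docker
  mv "$NEW" "$CUR"
  echo "unit updated"
fi
cat "$CUR"
```

Because `diff` exits non-zero both for differing files and for a missing target, a fresh guest (no `docker.service` yet) and a changed configuration take the same install path.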
	I0719 05:56:35.687480    5884 start.go:293] postStartSetup for "multinode-761300" (driver="hyperv")
	I0719 05:56:35.687480    5884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:56:35.700688    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:56:35.700688    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:37.879518    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:37.879518    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:37.879518    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:40.441837    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:40.442661    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:40.443047    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:56:40.560330    5884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8595826s)
	I0719 05:56:40.571842    5884 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:56:40.580930    5884 command_runner.go:130] > NAME=Buildroot
	I0719 05:56:40.581056    5884 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 05:56:40.581056    5884 command_runner.go:130] > ID=buildroot
	I0719 05:56:40.581056    5884 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 05:56:40.581056    5884 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 05:56:40.581169    5884 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:56:40.581169    5884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 05:56:40.581332    5884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 05:56:40.582494    5884 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 05:56:40.582494    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 05:56:40.594117    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:56:40.617336    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 05:56:40.671179    5884 start.go:296] duration metric: took 4.9836388s for postStartSetup
	I0719 05:56:40.671179    5884 fix.go:56] duration metric: took 1m30.6267998s for fixHost
	I0719 05:56:40.671710    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:42.853227    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:42.853820    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:42.853820    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:45.416457    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:45.416457    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:45.421787    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:45.422517    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:45.422517    5884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 05:56:45.564540    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721368605.583208225
	
	I0719 05:56:45.564678    5884 fix.go:216] guest clock: 1721368605.583208225
	I0719 05:56:45.564678    5884 fix.go:229] Guest: 2024-07-19 05:56:45.583208225 +0000 UTC Remote: 2024-07-19 05:56:40.6711797 +0000 UTC m=+96.816773801 (delta=4.912028525s)
	I0719 05:56:45.564832    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:47.720562    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:47.721609    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:47.721675    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:50.321979    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:50.323051    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:50.328976    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:56:50.328976    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.149 22 <nil> <nil>}
	I0719 05:56:50.329553    5884 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721368605
	I0719 05:56:50.470273    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 05:56:45 UTC 2024
	
	I0719 05:56:50.471250    5884 fix.go:236] clock set: Fri Jul 19 05:56:45 UTC 2024
	 (err=<nil>)
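The clock-sync step above reads the guest clock, computes a ~4.9s delta against the host, and resets the guest with `sudo date -s @<epoch>`. The epoch it passed decodes (with GNU `date`) to exactly the timestamp the guest then echoed back:

```shell
# Decode the epoch the provisioner sent to the guest.
# 1721368605 corresponds to the "Fri Jul 19 05:56:45 UTC 2024" echoed above.
TZ=UTC date -d @1721368605 '+%Y-%m-%dT%H:%M:%S'
```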
	I0719 05:56:50.471250    5884 start.go:83] releasing machines lock for "multinode-761300", held for 1m40.4269993s
	I0719 05:56:50.471578    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:52.656363    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:52.657128    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:52.657230    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:55.228156    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:56:55.228365    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:55.232499    5884 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:56:55.232683    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:55.242086    5884 ssh_runner.go:195] Run: cat /version.json
	I0719 05:56:55.242086    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:56:57.488510    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:57.489025    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:57.489025    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:56:57.530920    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:56:57.531414    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:56:57.531414    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:57:00.207198    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:57:00.207356    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:57:00.207983    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:57:00.223549    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.149
	
	I0719 05:57:00.223549    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:57:00.224809    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:57:00.323649    5884 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 05:57:00.323858    5884 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0912145s)
	W0719 05:57:00.323981    5884 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 05:57:00.330643    5884 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0719 05:57:00.331258    5884 ssh_runner.go:235] Completed: cat /version.json: (5.08911s)
	I0719 05:57:00.342305    5884 ssh_runner.go:195] Run: systemctl --version
	I0719 05:57:00.355256    5884 command_runner.go:130] > systemd 252 (252)
	I0719 05:57:00.355256    5884 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0719 05:57:00.366437    5884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:57:00.372977    5884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0719 05:57:00.374194    5884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:57:00.385391    5884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:57:00.411179    5884 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 05:57:00.412354    5884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:57:00.412354    5884 start.go:495] detecting cgroup driver to use...
	I0719 05:57:00.412636    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:57:00.447406    5884 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0719 05:57:00.456989    5884 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 05:57:00.456989    5884 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 05:57:00.461173    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 05:57:00.490802    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:57:00.510381    5884 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:57:00.521155    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:57:00.552635    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:57:00.583343    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:57:00.614391    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:57:00.644901    5884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:57:00.678828    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:57:00.708113    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 05:57:00.737686    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 05:57:00.767259    5884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:57:00.784169    5884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 05:57:00.795118    5884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:57:00.823917    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:01.021816    5884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:57:01.056105    5884 start.go:495] detecting cgroup driver to use...
	I0719 05:57:01.067016    5884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:57:01.090659    5884 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 05:57:01.090659    5884 command_runner.go:130] > [Unit]
	I0719 05:57:01.090659    5884 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 05:57:01.090659    5884 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 05:57:01.090659    5884 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 05:57:01.091595    5884 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 05:57:01.091595    5884 command_runner.go:130] > StartLimitBurst=3
	I0719 05:57:01.091595    5884 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 05:57:01.091595    5884 command_runner.go:130] > [Service]
	I0719 05:57:01.091595    5884 command_runner.go:130] > Type=notify
	I0719 05:57:01.091595    5884 command_runner.go:130] > Restart=on-failure
	I0719 05:57:01.091595    5884 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 05:57:01.091650    5884 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 05:57:01.091650    5884 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 05:57:01.091650    5884 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 05:57:01.091650    5884 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 05:57:01.091710    5884 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 05:57:01.091710    5884 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 05:57:01.091710    5884 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 05:57:01.091758    5884 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 05:57:01.091758    5884 command_runner.go:130] > ExecStart=
	I0719 05:57:01.091758    5884 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 05:57:01.091855    5884 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 05:57:01.091909    5884 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 05:57:01.091909    5884 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 05:57:01.091909    5884 command_runner.go:130] > LimitNOFILE=infinity
	I0719 05:57:01.091909    5884 command_runner.go:130] > LimitNPROC=infinity
	I0719 05:57:01.091909    5884 command_runner.go:130] > LimitCORE=infinity
	I0719 05:57:01.091964    5884 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 05:57:01.091964    5884 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 05:57:01.091964    5884 command_runner.go:130] > TasksMax=infinity
	I0719 05:57:01.091964    5884 command_runner.go:130] > TimeoutStartSec=0
	I0719 05:57:01.092008    5884 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 05:57:01.092008    5884 command_runner.go:130] > Delegate=yes
	I0719 05:57:01.092008    5884 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 05:57:01.092008    5884 command_runner.go:130] > KillMode=process
	I0719 05:57:01.092069    5884 command_runner.go:130] > [Install]
	I0719 05:57:01.092069    5884 command_runner.go:130] > WantedBy=multi-user.target
	I0719 05:57:01.104670    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:57:01.136259    5884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:57:01.180293    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:57:01.212731    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:57:01.246856    5884 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 05:57:01.305602    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:57:01.329393    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:57:01.361042    5884 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 05:57:01.374651    5884 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:57:01.379812    5884 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 05:57:01.389806    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:57:01.406884    5884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 05:57:01.450176    5884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:57:01.656381    5884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:57:01.847532    5884 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:57:01.847830    5884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:57:01.893639    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:02.079606    5884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:57:04.817363    5884 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7376388s)
	I0719 05:57:04.828768    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 05:57:04.862804    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:57:04.897712    5884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 05:57:05.110751    5884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 05:57:05.304762    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:05.506812    5884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 05:57:05.547891    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:57:05.581496    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:05.784626    5884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 05:57:05.892286    5884 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 05:57:05.903666    5884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 05:57:05.912341    5884 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 05:57:05.912417    5884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 05:57:05.912417    5884 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I0719 05:57:05.912417    5884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 05:57:05.912535    5884 command_runner.go:130] > Access: 2024-07-19 05:57:05.827743742 +0000
	I0719 05:57:05.912535    5884 command_runner.go:130] > Modify: 2024-07-19 05:57:05.827743742 +0000
	I0719 05:57:05.912535    5884 command_runner.go:130] > Change: 2024-07-19 05:57:05.830743752 +0000
	I0719 05:57:05.912535    5884 command_runner.go:130] >  Birth: -
	I0719 05:57:05.912933    5884 start.go:563] Will wait 60s for crictl version
	I0719 05:57:05.924316    5884 ssh_runner.go:195] Run: which crictl
	I0719 05:57:05.930116    5884 command_runner.go:130] > /usr/bin/crictl
	I0719 05:57:05.942056    5884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:57:06.001987    5884 command_runner.go:130] > Version:  0.1.0
	I0719 05:57:06.002668    5884 command_runner.go:130] > RuntimeName:  docker
	I0719 05:57:06.002668    5884 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 05:57:06.002668    5884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 05:57:06.002668    5884 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 05:57:06.011349    5884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:57:06.047340    5884 command_runner.go:130] > 27.0.3
	I0719 05:57:06.057071    5884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:57:06.090218    5884 command_runner.go:130] > 27.0.3
	I0719 05:57:06.095835    5884 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 05:57:06.096006    5884 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0719 05:57:06.099449    5884 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0719 05:57:06.099449    5884 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0719 05:57:06.099449    5884 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0719 05:57:06.099449    5884 ip.go:207] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:7a:e9:18 Flags:up|broadcast|multicast|running}
	I0719 05:57:06.102682    5884 ip.go:210] interface addr: fe80::1dc5:162d:cec2:b9bd/64
	I0719 05:57:06.102682    5884 ip.go:210] interface addr: 172.28.160.1/20
	I0719 05:57:06.112018    5884 ssh_runner.go:195] Run: grep 172.28.160.1	host.minikube.internal$ /etc/hosts
	I0719 05:57:06.118791    5884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.28.160.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:57:06.141010    5884 kubeadm.go:883] updating cluster {Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.165.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0719 05:57:06.141434    5884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:57:06.149380    5884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 05:57:06.172941    5884 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 05:57:06.172941    5884 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 05:57:06.172941    5884 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:57:06.172941    5884 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0719 05:57:06.172941    5884 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0719 05:57:06.172941    5884 docker.go:615] Images already preloaded, skipping extraction
	I0719 05:57:06.180941    5884 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 05:57:06.206061    5884 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0719 05:57:06.206156    5884 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0719 05:57:06.206204    5884 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0719 05:57:06.206204    5884 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0719 05:57:06.206204    5884 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 05:57:06.206204    5884 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0719 05:57:06.206204    5884 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0719 05:57:06.206204    5884 cache_images.go:84] Images are preloaded, skipping loading
	I0719 05:57:06.206204    5884 kubeadm.go:934] updating node { 172.28.162.149 8443 v1.30.3 docker true true} ...
	I0719 05:57:06.206204    5884 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-761300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.28.162.149
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0719 05:57:06.219163    5884 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 05:57:06.253142    5884 command_runner.go:130] > cgroupfs
	I0719 05:57:06.254150    5884 cni.go:84] Creating CNI manager for ""
	I0719 05:57:06.254150    5884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 05:57:06.254150    5884 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0719 05:57:06.254150    5884 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.28.162.149 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-761300 NodeName:multinode-761300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.28.162.149"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.28.162.149 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 05:57:06.254150    5884 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.28.162.149
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-761300"
	  kubeletExtraArgs:
	    node-ip: 172.28.162.149
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.28.162.149"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 05:57:06.264135    5884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0719 05:57:06.284370    5884 command_runner.go:130] > kubeadm
	I0719 05:57:06.284370    5884 command_runner.go:130] > kubectl
	I0719 05:57:06.284370    5884 command_runner.go:130] > kubelet
	I0719 05:57:06.284370    5884 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 05:57:06.296526    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 05:57:06.313231    5884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0719 05:57:06.346776    5884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 05:57:06.379170    5884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
	I0719 05:57:06.422338    5884 ssh_runner.go:195] Run: grep 172.28.162.149	control-plane.minikube.internal$ /etc/hosts
	I0719 05:57:06.428037    5884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.28.162.149	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 05:57:06.459893    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:06.649764    5884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:57:06.679700    5884 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300 for IP: 172.28.162.149
	I0719 05:57:06.679700    5884 certs.go:194] generating shared ca certs ...
	I0719 05:57:06.679700    5884 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:06.680692    5884 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0719 05:57:06.680692    5884 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0719 05:57:06.680692    5884 certs.go:256] generating profile certs ...
	I0719 05:57:06.681699    5884 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\client.key
	I0719 05:57:06.681699    5884 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.f844f9b5
	I0719 05:57:06.681699    5884 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.f844f9b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.28.162.149]
	I0719 05:57:06.860967    5884 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.f844f9b5 ...
	I0719 05:57:06.860967    5884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.f844f9b5: {Name:mk4dec42bb748b9416840ede947ad20260cdef70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:06.862282    5884 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.f844f9b5 ...
	I0719 05:57:06.862282    5884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.f844f9b5: {Name:mk8888d555d0e90d859c52eb64eaa2d1defffc7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:06.863082    5884 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt.f844f9b5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt
	I0719 05:57:06.876221    5884 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key.f844f9b5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key
	I0719 05:57:06.876452    5884 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key
	I0719 05:57:06.877520    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 05:57:06.877710    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 05:57:06.877956    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 05:57:06.879151    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem (1338 bytes)
	W0719 05:57:06.879479    5884 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604_empty.pem, impossibly tiny 0 bytes
	I0719 05:57:06.879479    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I0719 05:57:06.879888    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0719 05:57:06.880125    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0719 05:57:06.880125    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0719 05:57:06.880864    5884 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem (1708 bytes)
	I0719 05:57:06.880864    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:06.881586    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem -> /usr/share/ca-certificates/9604.pem
	I0719 05:57:06.881586    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /usr/share/ca-certificates/96042.pem
	I0719 05:57:06.882836    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 05:57:06.932759    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0719 05:57:06.982409    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 05:57:07.035423    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 05:57:07.088709    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0719 05:57:07.136475    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 05:57:07.187229    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 05:57:07.234697    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 05:57:07.282826    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 05:57:07.334676    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\9604.pem --> /usr/share/ca-certificates/9604.pem (1338 bytes)
	I0719 05:57:07.380706    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /usr/share/ca-certificates/96042.pem (1708 bytes)
	I0719 05:57:07.425333    5884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 05:57:07.469814    5884 ssh_runner.go:195] Run: openssl version
	I0719 05:57:07.477488    5884 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0719 05:57:07.488949    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 05:57:07.518453    5884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:07.525039    5884 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:07.525039    5884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:30 /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:07.535029    5884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 05:57:07.543601    5884 command_runner.go:130] > b5213941
	I0719 05:57:07.553760    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 05:57:07.587513    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9604.pem && ln -fs /usr/share/ca-certificates/9604.pem /etc/ssl/certs/9604.pem"
	I0719 05:57:07.619449    5884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9604.pem
	I0719 05:57:07.627041    5884 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 05:57:07.627177    5884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:46 /usr/share/ca-certificates/9604.pem
	I0719 05:57:07.639051    5884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9604.pem
	I0719 05:57:07.649028    5884 command_runner.go:130] > 51391683
	I0719 05:57:07.661758    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9604.pem /etc/ssl/certs/51391683.0"
	I0719 05:57:07.693704    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/96042.pem && ln -fs /usr/share/ca-certificates/96042.pem /etc/ssl/certs/96042.pem"
	I0719 05:57:07.723575    5884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/96042.pem
	I0719 05:57:07.730741    5884 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 05:57:07.730741    5884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:46 /usr/share/ca-certificates/96042.pem
	I0719 05:57:07.742020    5884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/96042.pem
	I0719 05:57:07.751157    5884 command_runner.go:130] > 3ec20f2e
	I0719 05:57:07.761641    5884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/96042.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 05:57:07.793320    5884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:57:07.807103    5884 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0719 05:57:07.807103    5884 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0719 05:57:07.807103    5884 command_runner.go:130] > Device: 8,1	Inode: 6290258     Links: 1
	I0719 05:57:07.807103    5884 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 05:57:07.807103    5884 command_runner.go:130] > Access: 2024-07-19 05:32:50.038998983 +0000
	I0719 05:57:07.807103    5884 command_runner.go:130] > Modify: 2024-07-19 05:32:50.038998983 +0000
	I0719 05:57:07.807103    5884 command_runner.go:130] > Change: 2024-07-19 05:32:50.038998983 +0000
	I0719 05:57:07.807257    5884 command_runner.go:130] >  Birth: 2024-07-19 05:32:50.038998983 +0000
	I0719 05:57:07.820480    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0719 05:57:07.830552    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.842273    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0719 05:57:07.855259    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.867218    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0719 05:57:07.877558    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.889242    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0719 05:57:07.899304    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.911728    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0719 05:57:07.920732    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.932948    5884 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0719 05:57:07.941281    5884 command_runner.go:130] > Certificate will not expire
	I0719 05:57:07.941342    5884 kubeadm.go:392] StartCluster: {Name:multinode-761300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-761300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.28.162.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.28.167.151 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.28.165.227 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 05:57:07.951099    5884 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 05:57:07.987212    5884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 05:57:08.004024    5884 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0719 05:57:08.004024    5884 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0719 05:57:08.004024    5884 command_runner.go:130] > /var/lib/minikube/etcd:
	I0719 05:57:08.004593    5884 command_runner.go:130] > member
	I0719 05:57:08.005094    5884 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0719 05:57:08.005192    5884 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0719 05:57:08.015846    5884 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0719 05:57:08.034462    5884 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0719 05:57:08.035942    5884 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-761300" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:57:08.036650    5884 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-761300" cluster setting kubeconfig missing "multinode-761300" context setting]
	I0719 05:57:08.037586    5884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:08.052628    5884 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:57:08.053515    5884 kapi.go:59] client config for multinode-761300: &rest.Config{Host:"https://172.28.162.149:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-761300/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1ef5e20), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 05:57:08.055343    5884 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 05:57:08.067708    5884 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0719 05:57:08.086154    5884 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0719 05:57:08.086154    5884 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0719 05:57:08.086154    5884 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0719 05:57:08.086154    5884 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0719 05:57:08.086315    5884 command_runner.go:130] >  kind: InitConfiguration
	I0719 05:57:08.086315    5884 command_runner.go:130] >  localAPIEndpoint:
	I0719 05:57:08.086315    5884 command_runner.go:130] > -  advertiseAddress: 172.28.162.16
	I0719 05:57:08.086315    5884 command_runner.go:130] > +  advertiseAddress: 172.28.162.149
	I0719 05:57:08.086315    5884 command_runner.go:130] >    bindPort: 8443
	I0719 05:57:08.086315    5884 command_runner.go:130] >  bootstrapTokens:
	I0719 05:57:08.086315    5884 command_runner.go:130] >    - groups:
	I0719 05:57:08.086315    5884 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0719 05:57:08.086315    5884 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0719 05:57:08.086315    5884 command_runner.go:130] >    name: "multinode-761300"
	I0719 05:57:08.086501    5884 command_runner.go:130] >    kubeletExtraArgs:
	I0719 05:57:08.086501    5884 command_runner.go:130] > -    node-ip: 172.28.162.16
	I0719 05:57:08.086501    5884 command_runner.go:130] > +    node-ip: 172.28.162.149
	I0719 05:57:08.086501    5884 command_runner.go:130] >    taints: []
	I0719 05:57:08.086501    5884 command_runner.go:130] >  ---
	I0719 05:57:08.086566    5884 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0719 05:57:08.086616    5884 command_runner.go:130] >  kind: ClusterConfiguration
	I0719 05:57:08.086690    5884 command_runner.go:130] >  apiServer:
	I0719 05:57:08.086747    5884 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.28.162.16"]
	I0719 05:57:08.086747    5884 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.28.162.149"]
	I0719 05:57:08.086747    5884 command_runner.go:130] >    extraArgs:
	I0719 05:57:08.086775    5884 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0719 05:57:08.086819    5884 command_runner.go:130] >  controllerManager:
	I0719 05:57:08.086886    5884 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.28.162.16
	+  advertiseAddress: 172.28.162.149
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-761300"
	   kubeletExtraArgs:
	-    node-ip: 172.28.162.16
	+    node-ip: 172.28.162.149
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.28.162.16"]
	+  certSANs: ["127.0.0.1", "localhost", "172.28.162.149"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0719 05:57:08.086933    5884 kubeadm.go:1160] stopping kube-system containers ...
	I0719 05:57:08.096093    5884 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 05:57:08.126228    5884 command_runner.go:130] > 17479f193bde
	I0719 05:57:08.126326    5884 command_runner.go:130] > 7992ac3e3292
	I0719 05:57:08.126326    5884 command_runner.go:130] > 2db86aab06c2
	I0719 05:57:08.126326    5884 command_runner.go:130] > 8880cece050b
	I0719 05:57:08.126326    5884 command_runner.go:130] > 81297ef97ccf
	I0719 05:57:08.126326    5884 command_runner.go:130] > c7f3e45f7ac5
	I0719 05:57:08.126326    5884 command_runner.go:130] > 605bd6887ea9
	I0719 05:57:08.126411    5884 command_runner.go:130] > 342774c2cfe8
	I0719 05:57:08.126411    5884 command_runner.go:130] > 1e25c1f162f5
	I0719 05:57:08.126411    5884 command_runner.go:130] > 86b38e87981e
	I0719 05:57:08.126411    5884 command_runner.go:130] > d59292a30318
	I0719 05:57:08.126411    5884 command_runner.go:130] > d8ebf4b1a3d9
	I0719 05:57:08.126411    5884 command_runner.go:130] > b8966b015c45
	I0719 05:57:08.126411    5884 command_runner.go:130] > 20495b8d4837
	I0719 05:57:08.126411    5884 command_runner.go:130] > 9afe226cce24
	I0719 05:57:08.126476    5884 command_runner.go:130] > 44cdc617bc65
	I0719 05:57:08.126539    5884 docker.go:483] Stopping containers: [17479f193bde 7992ac3e3292 2db86aab06c2 8880cece050b 81297ef97ccf c7f3e45f7ac5 605bd6887ea9 342774c2cfe8 1e25c1f162f5 86b38e87981e d59292a30318 d8ebf4b1a3d9 b8966b015c45 20495b8d4837 9afe226cce24 44cdc617bc65]
	I0719 05:57:08.135388    5884 ssh_runner.go:195] Run: docker stop 17479f193bde 7992ac3e3292 2db86aab06c2 8880cece050b 81297ef97ccf c7f3e45f7ac5 605bd6887ea9 342774c2cfe8 1e25c1f162f5 86b38e87981e d59292a30318 d8ebf4b1a3d9 b8966b015c45 20495b8d4837 9afe226cce24 44cdc617bc65
	I0719 05:57:08.163424    5884 command_runner.go:130] > 17479f193bde
	I0719 05:57:08.163424    5884 command_runner.go:130] > 7992ac3e3292
	I0719 05:57:08.163424    5884 command_runner.go:130] > 2db86aab06c2
	I0719 05:57:08.163496    5884 command_runner.go:130] > 8880cece050b
	I0719 05:57:08.163496    5884 command_runner.go:130] > 81297ef97ccf
	I0719 05:57:08.163496    5884 command_runner.go:130] > c7f3e45f7ac5
	I0719 05:57:08.163496    5884 command_runner.go:130] > 605bd6887ea9
	I0719 05:57:08.163496    5884 command_runner.go:130] > 342774c2cfe8
	I0719 05:57:08.163496    5884 command_runner.go:130] > 1e25c1f162f5
	I0719 05:57:08.163496    5884 command_runner.go:130] > 86b38e87981e
	I0719 05:57:08.163496    5884 command_runner.go:130] > d59292a30318
	I0719 05:57:08.163496    5884 command_runner.go:130] > d8ebf4b1a3d9
	I0719 05:57:08.163496    5884 command_runner.go:130] > b8966b015c45
	I0719 05:57:08.163496    5884 command_runner.go:130] > 20495b8d4837
	I0719 05:57:08.163654    5884 command_runner.go:130] > 9afe226cce24
	I0719 05:57:08.163654    5884 command_runner.go:130] > 44cdc617bc65
	I0719 05:57:08.174904    5884 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0719 05:57:08.213199    5884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 05:57:08.231381    5884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0719 05:57:08.231885    5884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0719 05:57:08.231885    5884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0719 05:57:08.231936    5884 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:57:08.232234    5884 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 05:57:08.232329    5884 kubeadm.go:157] found existing configuration files:
	
	I0719 05:57:08.244669    5884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0719 05:57:08.263550    5884 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:57:08.263645    5884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0719 05:57:08.274740    5884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0719 05:57:08.304203    5884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0719 05:57:08.320371    5884 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:57:08.320421    5884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0719 05:57:08.331195    5884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0719 05:57:08.360031    5884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0719 05:57:08.376206    5884 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:57:08.376266    5884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0719 05:57:08.388028    5884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0719 05:57:08.415568    5884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0719 05:57:08.431490    5884 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:57:08.432368    5884 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0719 05:57:08.443052    5884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0719 05:57:08.471580    5884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 05:57:08.500902    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0719 05:57:08.815246    5884 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0719 05:57:08.815354    5884 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0719 05:57:08.815354    5884 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0719 05:57:08.815354    5884 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0719 05:57:08.815466    5884 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0719 05:57:08.815466    5884 command_runner.go:130] > [certs] Using the existing "sa" key
	I0719 05:57:08.815528    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 05:57:10.014579    5884 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 05:57:10.014579    5884 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1990362s)
	I0719 05:57:10.014579    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:10.330906    5884 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 05:57:10.330906    5884 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 05:57:10.330906    5884 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0719 05:57:10.330906    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:10.419159    5884 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 05:57:10.419955    5884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 05:57:10.419955    5884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 05:57:10.419955    5884 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 05:57:10.420069    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:10.551363    5884 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 05:57:10.552757    5884 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:57:10.565940    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:11.069679    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:11.574690    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:12.070556    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:12.580934    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:12.609000    5884 command_runner.go:130] > 1971
	I0719 05:57:12.609599    5884 api_server.go:72] duration metric: took 2.056866s to wait for apiserver process to appear ...
	I0719 05:57:12.609690    5884 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:57:12.609690    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:15.492826    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 05:57:15.493473    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 05:57:15.493473    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:15.529623    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0719 05:57:15.529623    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0719 05:57:15.619842    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:15.629806    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:57:15.629806    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:57:16.124853    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:16.133697    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:57:16.133697    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:57:16.610826    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:16.641800    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0719 05:57:16.641800    5884 api_server.go:103] status: https://172.28.162.149:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0719 05:57:17.120724    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:17.128274    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 200:
	ok
	I0719 05:57:17.128274    5884 round_trippers.go:463] GET https://172.28.162.149:8443/version
	I0719 05:57:17.128274    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:17.128274    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:17.128274    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:17.140478    5884 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 05:57:17.140530    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:17.140530    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:17.140530    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Content-Length: 263
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:17 GMT
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Audit-Id: 9de0256c-9477-49c5-af84-d61c7c2056bd
	I0719 05:57:17.140530    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:17.140643    5884 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 05:57:17.140780    5884 api_server.go:141] control plane version: v1.30.3
	I0719 05:57:17.140855    5884 api_server.go:131] duration metric: took 4.5310343s to wait for apiserver health ...
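The sequence above shows the apiserver health wait: minikube polls `/healthz`, which first returns 403 (anonymous access), then 500 with a verbose per-check breakdown (`[+]name ok` / `[-]name failed: reason withheld`), and finally 200. As a minimal sketch of how such a verbose body can be parsed to surface the failing checks (helper name and parsing are illustrative, not minikube's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// failedChecks extracts the names of failing checks from a verbose
// /healthz response body, where each check is reported on its own
// line as "[+]name ok" or "[-]name failed: reason withheld".
func failedChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "[-]") {
			continue
		}
		name := strings.TrimPrefix(line, "[-]")
		// Keep only the check name, dropping " failed: reason withheld".
		if i := strings.Index(name, " "); i >= 0 {
			name = name[:i]
		}
		failed = append(failed, name)
	}
	return failed
}

func main() {
	// Excerpt shaped like the 500 responses logged above.
	body := `[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
healthz check failed`
	fmt.Println(failedChecks(body))
	// → [poststarthook/rbac/bootstrap-roles poststarthook/bootstrap-controller]
}
```

In the log, the set of `[-]` entries shrinks on each poll (post-start hooks completing one by one) until the endpoint returns a bare `ok`, at which point the wait succeeds after ~4.5s.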
	I0719 05:57:17.140855    5884 cni.go:84] Creating CNI manager for ""
	I0719 05:57:17.140855    5884 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0719 05:57:17.145288    5884 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0719 05:57:17.160458    5884 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0719 05:57:17.172590    5884 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0719 05:57:17.172696    5884 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0719 05:57:17.172696    5884 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0719 05:57:17.172696    5884 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0719 05:57:17.172696    5884 command_runner.go:130] > Access: 2024-07-19 05:55:40.547944400 +0000
	I0719 05:57:17.172696    5884 command_runner.go:130] > Modify: 2024-07-18 23:04:21.000000000 +0000
	I0719 05:57:17.172847    5884 command_runner.go:130] > Change: 2024-07-19 05:55:31.647000000 +0000
	I0719 05:57:17.172847    5884 command_runner.go:130] >  Birth: -
	I0719 05:57:17.173733    5884 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0719 05:57:17.173733    5884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0719 05:57:17.236238    5884 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0719 05:57:18.790984    5884 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0719 05:57:18.791711    5884 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0719 05:57:18.791711    5884 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0719 05:57:18.791711    5884 command_runner.go:130] > daemonset.apps/kindnet configured
	I0719 05:57:18.791850    5884 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.5555935s)
	I0719 05:57:18.792029    5884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:57:18.792460    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:18.792557    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:18.792557    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:18.792557    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:18.799163    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:18.799163    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:18.799163    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:18.799163    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:18.799163    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:18.799163    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:18 GMT
	I0719 05:57:18.799163    5884 round_trippers.go:580]     Audit-Id: 2d62b38e-4352-41fe-b558-1a503cc6dc45
	I0719 05:57:18.799163    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:18.801165    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1882"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 88240 chars]
	I0719 05:57:18.807142    5884 system_pods.go:59] 12 kube-system pods found
	I0719 05:57:18.807142    5884 system_pods.go:61] "coredns-7db6d8ff4d-hw9kh" [d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0719 05:57:18.807142    5884 system_pods.go:61] "etcd-multinode-761300" [296a455d-9236-4939-b002-5fa6dd843880] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kindnet-22ts9" [0d3c5a3b-fa22-4542-b9a5-478056ccc9cc] Running
	I0719 05:57:18.807142    5884 system_pods.go:61] "kindnet-6wxhn" [c0859b76-8ace-4de2-a940-4344594c5d27] Running
	I0719 05:57:18.807142    5884 system_pods.go:61] "kindnet-dj497" [124722d1-6c9c-4de4-b242-2f58e89b223b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-apiserver-multinode-761300" [89d493c7-c827-467c-ae64-9cdb2b5061df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-controller-manager-multinode-761300" [2124834c-1961-49fb-8699-fba2fc5dd0ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-proxy-c48b9" [67e2ee42-a2c4-4ed1-a2bf-840702a255b4] Running
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-proxy-c4z7f" [17ff8aac-2d57-44fb-a3ec-f0d6ea181881] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-proxy-mjv8l" [4d0f7d34-4031-46d3-a580-a2d080d9d335] Running
	I0719 05:57:18.807142    5884 system_pods.go:61] "kube-scheduler-multinode-761300" [49a739d1-1ae3-4a41-aebc-0eb7b2b4f242] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0719 05:57:18.807142    5884 system_pods.go:61] "storage-provisioner" [87c864ea-0853-481c-ab24-2ab209760f69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 05:57:18.807142    5884 system_pods.go:74] duration metric: took 15.113ms to wait for pod list to return data ...
	I0719 05:57:18.807142    5884 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:57:18.807142    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes
	I0719 05:57:18.807142    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:18.807142    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:18.807142    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:18.811495    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:18.811495    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:18.811495    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:18.811495    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:18.811495    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:18.811495    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:18.811495    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:18 GMT
	I0719 05:57:18.811495    5884 round_trippers.go:580]     Audit-Id: 02671730-14c8-4372-a07e-fed9482525db
	I0719 05:57:18.812151    5884 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1882"},"items":[{"metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16290 chars]
	I0719 05:57:18.813109    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:18.813109    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:18.813109    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:18.813109    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:18.813109    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:18.813109    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:18.813109    5884 node_conditions.go:105] duration metric: took 5.9668ms to run NodePressure ...
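The NodePressure verification above lists each node's ephemeral storage and CPU capacity and confirms no node is under resource pressure. A minimal sketch of that kind of check, assuming illustrative struct and function names (not the client-go types minikube actually uses):

```go
package main

import "fmt"

// nodeCondition holds the two fields a pressure check needs from a
// Kubernetes node condition (illustrative, not the real API type).
type nodeCondition struct {
	Type   string
	Status string
}

// underPressure reports whether any pressure-type condition is True,
// which would make the node unsuitable for scheduling new pods.
func underPressure(conds []nodeCondition) bool {
	pressure := map[string]bool{
		"MemoryPressure": true,
		"DiskPressure":   true,
		"PIDPressure":    true,
	}
	for _, c := range conds {
		if pressure[c.Type] && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	// A node can be not-yet-Ready (as multinode-761300 is here)
	// while still reporting no resource pressure.
	conds := []nodeCondition{
		{"MemoryPressure", "False"},
		{"DiskPressure", "False"},
		{"Ready", "False"},
	}
	fmt.Println(underPressure(conds))
	// → false
}
```

This distinction matters in the log: the NodePressure check passes for all three nodes even though the control-plane node has not yet reached `Ready`.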
	I0719 05:57:18.813109    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0719 05:57:19.055300    5884 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0719 05:57:19.152930    5884 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0719 05:57:19.154738    5884 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0719 05:57:19.155750    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0719 05:57:19.155750    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.155750    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.155750    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.160793    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:19.160793    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.160793    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.160793    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.160793    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.160793    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.160793    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.160793    5884 round_trippers.go:580]     Audit-Id: 4083154a-87b4-44c1-9990-96caec4db871
	I0719 05:57:19.161738    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1884"},"items":[{"metadata":{"name":"etcd-multinode-761300","namespace":"kube-system","uid":"296a455d-9236-4939-b002-5fa6dd843880","resourceVersion":"1813","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.162.149:2379","kubernetes.io/config.hash":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.mirror":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.seen":"2024-07-19T05:57:10.588894693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 30563 chars]
	I0719 05:57:19.163727    5884 kubeadm.go:739] kubelet initialised
	I0719 05:57:19.163727    5884 kubeadm.go:740] duration metric: took 7.9763ms waiting for restarted kubelet to initialise ...
	I0719 05:57:19.163727    5884 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:57:19.163727    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:19.163727    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.163727    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.163727    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.177908    5884 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0719 05:57:19.178000    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.178000    5884 round_trippers.go:580]     Audit-Id: 17151b11-a60a-469b-a29f-72e4627cf28c
	I0719 05:57:19.178000    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.178090    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.178090    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.178090    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.178090    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.179809    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1884"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 88240 chars]
	I0719 05:57:19.184675    5884 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.184675    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:19.184675    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.184675    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.184675    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.187126    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.187126    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.187126    5884 round_trippers.go:580]     Audit-Id: dab98f20-8223-49e0-9e1f-34642024fe26
	I0719 05:57:19.187126    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.187126    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.187126    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.187126    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.187126    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.187126    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:19.188130    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.188130    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.188130    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.188130    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.195125    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:19.195125    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.195125    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.195125    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.195125    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.195125    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.195125    5884 round_trippers.go:580]     Audit-Id: 66c5aa30-e5d3-4f2f-aab0-1af898c8a4f6
	I0719 05:57:19.195125    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.196122    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:19.196122    5884 pod_ready.go:97] node "multinode-761300" hosting pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.196122    5884 pod_ready.go:81] duration metric: took 11.4468ms for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.196122    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.196122    5884 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.196122    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-761300
	I0719 05:57:19.196122    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.196122    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.196122    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.200136    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:19.200164    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.200164    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.200164    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.200164    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.200164    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.200164    5884 round_trippers.go:580]     Audit-Id: 4810ddb7-492a-431d-a365-465b8975d528
	I0719 05:57:19.200164    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.200164    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-761300","namespace":"kube-system","uid":"296a455d-9236-4939-b002-5fa6dd843880","resourceVersion":"1813","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.162.149:2379","kubernetes.io/config.hash":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.mirror":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.seen":"2024-07-19T05:57:10.588894693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6395 chars]
	I0719 05:57:19.201092    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.201092    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.201092    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.201092    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.203789    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.203789    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.203789    5884 round_trippers.go:580]     Audit-Id: 63504cec-7f02-44bb-9d08-40515bc2db7b
	I0719 05:57:19.203789    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.203789    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.203789    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.203789    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.203789    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.203789    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:19.204754    5884 pod_ready.go:97] node "multinode-761300" hosting pod "etcd-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.204754    5884 pod_ready.go:81] duration metric: took 8.6311ms for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.204754    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "etcd-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.204754    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.204754    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-761300
	I0719 05:57:19.204754    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.204754    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.204754    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.207759    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:19.207759    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.207759    5884 round_trippers.go:580]     Audit-Id: 88ee6419-9e67-4312-b883-a6cfc037cc52
	I0719 05:57:19.207759    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.207759    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.207759    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.207759    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.207759    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.207759    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-761300","namespace":"kube-system","uid":"89d493c7-c827-467c-ae64-9cdb2b5061df","resourceVersion":"1814","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.162.149:8443","kubernetes.io/config.hash":"b21ce007ca118b4c86324a165dd45eec","kubernetes.io/config.mirror":"b21ce007ca118b4c86324a165dd45eec","kubernetes.io/config.seen":"2024-07-19T05:57:10.501200307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7949 chars]
	I0719 05:57:19.208525    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.208525    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.208525    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.208525    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.211131    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.211131    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.211131    5884 round_trippers.go:580]     Audit-Id: 1066d24f-ba52-4d3a-9a2d-7d5a5d84044b
	I0719 05:57:19.211131    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.211131    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.211131    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.211131    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.211131    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.212235    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:19.212696    5884 pod_ready.go:97] node "multinode-761300" hosting pod "kube-apiserver-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.212750    5884 pod_ready.go:81] duration metric: took 7.942ms for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.212750    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "kube-apiserver-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.212750    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.212808    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-761300
	I0719 05:57:19.212808    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.212886    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.212909    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.216532    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:19.216532    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.216532    5884 round_trippers.go:580]     Audit-Id: 06242767-62d5-410c-a42d-97672ebc95c5
	I0719 05:57:19.216532    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.216532    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.216532    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.216532    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.216532    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.217137    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-761300","namespace":"kube-system","uid":"2124834c-1961-49fb-8699-fba2fc5dd0ac","resourceVersion":"1811","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.mirror":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.seen":"2024-07-19T05:33:02.001207967Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0719 05:57:19.217446    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.217446    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.217446    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.217446    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.220031    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.220659    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.220659    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.220659    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.220659    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.220659    5884 round_trippers.go:580]     Audit-Id: 41f043a0-30ae-4579-a973-858f3ab325dd
	I0719 05:57:19.220659    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.220659    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.221033    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:19.221456    5884 pod_ready.go:97] node "multinode-761300" hosting pod "kube-controller-manager-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.221456    5884 pod_ready.go:81] duration metric: took 8.7057ms for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.221456    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "kube-controller-manager-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:19.221456    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c48b9" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.403970    5884 request.go:629] Waited for 182.1382ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c48b9
	I0719 05:57:19.404169    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c48b9
	I0719 05:57:19.404169    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.404280    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.404280    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.415617    5884 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0719 05:57:19.415617    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.415617    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.415893    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.415893    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.415893    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.415893    5884 round_trippers.go:580]     Audit-Id: 6b0b82cf-8a91-4377-b093-da0d5c860823
	I0719 05:57:19.415893    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.416395    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c48b9","generateName":"kube-proxy-","namespace":"kube-system","uid":"67e2ee42-a2c4-4ed1-a2bf-840702a255b4","resourceVersion":"1764","creationTimestamp":"2024-07-19T05:41:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:41:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0719 05:57:19.592845    5884 request.go:629] Waited for 175.4962ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m03
	I0719 05:57:19.593050    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m03
	I0719 05:57:19.593050    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.593050    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.593162    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.596579    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:19.596579    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.597497    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.597497    5884 round_trippers.go:580]     Audit-Id: ff5c41f5-1397-4082-9ecd-6ebc1b392b28
	I0719 05:57:19.597497    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.597497    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.597540    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.597540    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.597669    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m03","uid":"b19fd562-f462-4172-835f-56c42463b282","resourceVersion":"1773","creationTimestamp":"2024-07-19T05:52:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_52_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:52:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0719 05:57:19.598457    5884 pod_ready.go:97] node "multinode-761300-m03" hosting pod "kube-proxy-c48b9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m03" has status "Ready":"Unknown"
	I0719 05:57:19.598457    5884 pod_ready.go:81] duration metric: took 376.9964ms for pod "kube-proxy-c48b9" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:19.598604    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300-m03" hosting pod "kube-proxy-c48b9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m03" has status "Ready":"Unknown"
	I0719 05:57:19.598604    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:19.795287    5884 request.go:629] Waited for 196.1408ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:57:19.795484    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:57:19.795484    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.795484    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.795575    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:19.798313    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:19.798313    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:19.798313    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:19.798313    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:19.798313    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:19.798313    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:19 GMT
	I0719 05:57:19.798313    5884 round_trippers.go:580]     Audit-Id: ffdd68f1-b7c1-4795-bd74-8c1b90942533
	I0719 05:57:19.798313    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:19.799298    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4z7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"17ff8aac-2d57-44fb-a3ec-f0d6ea181881","resourceVersion":"1888","creationTimestamp":"2024-07-19T05:33:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0719 05:57:19.998798    5884 request.go:629] Waited for 198.3651ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.998798    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:19.998798    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:19.998798    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:19.999011    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.002442    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:20.002748    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.002748    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.002806    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.002806    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.002806    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.002806    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.002806    5884 round_trippers.go:580]     Audit-Id: c5babcf1-fddc-453e-b36a-f9c8206749df
	I0719 05:57:20.003026    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:20.003789    5884 pod_ready.go:97] node "multinode-761300" hosting pod "kube-proxy-c4z7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:20.003789    5884 pod_ready.go:81] duration metric: took 405.1793ms for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:20.003789    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "kube-proxy-c4z7f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:20.003789    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:20.202616    5884 request.go:629] Waited for 198.7356ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:57:20.203267    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:57:20.203267    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:20.203267    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:20.203267    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.206849    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:20.207289    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.207336    5884 round_trippers.go:580]     Audit-Id: 0df9e7b0-45d4-4c1d-95f5-7b45a6c27213
	I0719 05:57:20.207336    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.207336    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.207336    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.207336    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.207384    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.207457    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjv8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d0f7d34-4031-46d3-a580-a2d080d9d335","resourceVersion":"1787","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0719 05:57:20.405218    5884 request.go:629] Waited for 196.8375ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:57:20.405523    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:57:20.405523    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:20.405523    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:20.405523    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.409432    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:20.409617    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.409617    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.409617    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.409617    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.409617    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.409617    5884 round_trippers.go:580]     Audit-Id: 6d77f774-6239-4948-bb6f-553cb42185f0
	I0719 05:57:20.409617    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.410347    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"1789","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4486 chars]
	I0719 05:57:20.411843    5884 pod_ready.go:97] node "multinode-761300-m02" hosting pod "kube-proxy-mjv8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m02" has status "Ready":"Unknown"
	I0719 05:57:20.411894    5884 pod_ready.go:81] duration metric: took 408.1003ms for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:20.411894    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300-m02" hosting pod "kube-proxy-mjv8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m02" has status "Ready":"Unknown"
	I0719 05:57:20.411894    5884 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:20.592484    5884 request.go:629] Waited for 180.4254ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:57:20.592777    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:57:20.592777    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:20.592980    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:20.592980    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.597110    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:20.597110    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.597110    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.597613    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.597613    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.597613    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.597613    5884 round_trippers.go:580]     Audit-Id: 6c13160b-6dff-4a59-9614-a2e00b682068
	I0719 05:57:20.597613    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.598366    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-761300","namespace":"kube-system","uid":"49a739d1-1ae3-4a41-aebc-0eb7b2b4f242","resourceVersion":"1812","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.mirror":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.seen":"2024-07-19T05:33:02.001209067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5449 chars]
	I0719 05:57:20.798280    5884 request.go:629] Waited for 198.9027ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:20.798397    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:20.798397    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:20.798397    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:20.798702    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:20.802031    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:20.802905    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:20.802905    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:20.802905    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:20.802905    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:20 GMT
	I0719 05:57:20.802905    5884 round_trippers.go:580]     Audit-Id: 07248fc1-ebaa-4f45-9f4c-7a02680793e2
	I0719 05:57:20.802905    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:20.802905    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:20.803658    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:20.804287    5884 pod_ready.go:97] node "multinode-761300" hosting pod "kube-scheduler-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:20.804351    5884 pod_ready.go:81] duration metric: took 392.3845ms for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:20.804456    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300" hosting pod "kube-scheduler-multinode-761300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300" has status "Ready":"False"
	I0719 05:57:20.804456    5884 pod_ready.go:38] duration metric: took 1.6407091s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:57:20.804456    5884 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 05:57:20.827115    5884 command_runner.go:130] > -16
	I0719 05:57:20.827299    5884 ops.go:34] apiserver oom_adj: -16
	I0719 05:57:20.827299    5884 kubeadm.go:597] duration metric: took 12.821865s to restartPrimaryControlPlane
	I0719 05:57:20.827299    5884 kubeadm.go:394] duration metric: took 12.8858005s to StartCluster
	I0719 05:57:20.827299    5884 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:20.827299    5884 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 05:57:20.830399    5884 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 05:57:20.832063    5884 start.go:235] Will wait 6m0s for node &{Name: IP:172.28.162.149 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 05:57:20.832063    5884 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0719 05:57:20.832710    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:57:20.836580    5884 out.go:177] * Enabled addons: 
	I0719 05:57:20.844211    5884 out.go:177] * Verifying Kubernetes components...
	I0719 05:57:20.848339    5884 addons.go:510] duration metric: took 16.2757ms for enable addons: enabled=[]
	I0719 05:57:20.857811    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:57:21.147637    5884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0719 05:57:21.174019    5884 node_ready.go:35] waiting up to 6m0s for node "multinode-761300" to be "Ready" ...
	I0719 05:57:21.174086    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:21.174086    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:21.174086    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:21.174086    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:21.174795    5884 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0719 05:57:21.174795    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:21.178062    5884 round_trippers.go:580]     Audit-Id: a61d494e-ae1c-4976-bca1-94bbbfec8722
	I0719 05:57:21.178062    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:21.178062    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:21.178093    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:21.178093    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:21.178111    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:21 GMT
	I0719 05:57:21.178616    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:21.684584    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:21.684584    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:21.684584    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:21.684584    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:21.689482    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:21.689746    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:21.689746    5884 round_trippers.go:580]     Audit-Id: bf768ffd-ce34-488e-a8e2-890c21ce5cc9
	I0719 05:57:21.689746    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:21.689845    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:21.689845    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:21.689873    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:21.689873    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:21 GMT
	I0719 05:57:21.690116    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:22.184802    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:22.184802    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:22.184802    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:22.184802    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:22.189464    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:22.189464    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:22.189464    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:22 GMT
	I0719 05:57:22.189464    5884 round_trippers.go:580]     Audit-Id: 1904384c-5956-4842-9573-48c9351f8afd
	I0719 05:57:22.189464    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:22.189464    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:22.189464    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:22.189464    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:22.189464    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:22.683677    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:22.683677    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:22.683677    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:22.683677    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:22.686252    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:22.687186    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:22.687186    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:22 GMT
	I0719 05:57:22.687243    5884 round_trippers.go:580]     Audit-Id: 6a2c8282-9c58-464f-b07d-3330bb1baaa2
	I0719 05:57:22.687243    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:22.687243    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:22.687243    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:22.687243    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:22.687615    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:23.181443    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:23.181768    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:23.181768    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:23.181861    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:23.186508    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:23.187167    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:23.187167    5884 round_trippers.go:580]     Audit-Id: 34c042c7-0948-4810-a940-daa26fc77eb7
	I0719 05:57:23.187167    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:23.187167    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:23.187167    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:23.187167    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:23.187167    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:23 GMT
	I0719 05:57:23.187167    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:23.188031    5884 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:57:23.679175    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:23.679225    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:23.679225    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:23.679225    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:23.683812    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:23.684520    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:23.684520    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:23.684520    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:23 GMT
	I0719 05:57:23.684520    5884 round_trippers.go:580]     Audit-Id: 49fa6658-d7b7-4404-838f-4e049df09a0b
	I0719 05:57:23.684520    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:23.684520    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:23.684520    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:23.684520    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:24.177176    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:24.177176    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:24.177176    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:24.177176    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:24.188554    5884 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0719 05:57:24.188554    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:24.189087    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:24.189087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:24.189087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:24.189087    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:24 GMT
	I0719 05:57:24.189087    5884 round_trippers.go:580]     Audit-Id: 2f5f902c-14d8-4cf4-984c-f8234346aebc
	I0719 05:57:24.189087    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:24.189340    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:24.679252    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:24.679648    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:24.679648    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:24.679648    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:24.683310    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:24.683310    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:24.683310    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:24 GMT
	I0719 05:57:24.683310    5884 round_trippers.go:580]     Audit-Id: ea5e3670-a29d-47aa-b415-00a207ac7e58
	I0719 05:57:24.683505    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:24.683505    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:24.683505    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:24.683505    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:24.684163    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:25.177501    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:25.177501    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:25.177501    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:25.177501    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:25.181146    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:25.181503    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:25.181503    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:25.181503    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:25.181503    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:25 GMT
	I0719 05:57:25.181608    5884 round_trippers.go:580]     Audit-Id: fae9ad74-a2da-4327-9a65-94bd94bb0271
	I0719 05:57:25.181608    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:25.181608    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:25.182037    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:25.675652    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:25.675652    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:25.675652    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:25.675652    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:25.679252    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:25.679252    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:25.679252    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:25.679252    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:25.679252    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:25.679252    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:25.679252    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:25 GMT
	I0719 05:57:25.679252    5884 round_trippers.go:580]     Audit-Id: 8aa3bb72-2614-4667-a409-bed6ac78cd2d
	I0719 05:57:25.679577    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:25.680347    5884 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:57:26.174934    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:26.174934    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:26.174934    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:26.175044    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:26.179360    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:26.179489    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:26.179489    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:26 GMT
	I0719 05:57:26.179489    5884 round_trippers.go:580]     Audit-Id: 2f4fc3c6-106b-476d-8b72-0b1e466ccb70
	I0719 05:57:26.179489    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:26.179489    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:26.179489    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:26.179489    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:26.179766    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:26.675711    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:26.675711    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:26.675711    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:26.675711    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:26.679673    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:26.679857    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:26.679857    5884 round_trippers.go:580]     Audit-Id: a6055a1a-000b-4d75-b044-130baf0a7423
	I0719 05:57:26.679857    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:26.679857    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:26.679857    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:26.679857    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:26.679857    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:26 GMT
	I0719 05:57:26.680194    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:27.189058    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:27.189310    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:27.189310    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:27.189310    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:27.200018    5884 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 05:57:27.200996    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:27.200996    5884 round_trippers.go:580]     Audit-Id: 1253aa49-8dc0-4f27-bc11-d65e285499fd
	I0719 05:57:27.200996    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:27.201042    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:27.201042    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:27.201042    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:27.201042    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:27 GMT
	I0719 05:57:27.202346    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:27.675550    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:27.675684    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:27.675684    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:27.675684    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:27.679135    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:27.679932    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:27.679932    5884 round_trippers.go:580]     Audit-Id: 812a0eb9-4912-43cd-b534-b1704f28b62a
	I0719 05:57:27.679932    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:27.679932    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:27.679932    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:27.679932    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:27.679932    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:27 GMT
	I0719 05:57:27.680347    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:27.680834    5884 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:57:28.174776    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:28.174776    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:28.174867    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:28.174867    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:28.178673    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:28.178673    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:28.179044    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:28.179044    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:28.179044    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:28.179044    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:28 GMT
	I0719 05:57:28.179044    5884 round_trippers.go:580]     Audit-Id: 483a3fd1-678e-46e8-b070-5109240038d9
	I0719 05:57:28.179044    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:28.179264    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:28.682355    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:28.682355    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:28.682355    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:28.682355    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:28.685964    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:28.685964    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:28.685964    5884 round_trippers.go:580]     Audit-Id: d31bceb4-bc9e-4b03-a367-4fb3307f3ea2
	I0719 05:57:28.686266    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:28.686266    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:28.686266    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:28.686266    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:28.686266    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:28 GMT
	I0719 05:57:28.686407    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1802","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5366 chars]
	I0719 05:57:29.175033    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:29.175033    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:29.175033    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:29.175033    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:29.179087    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:29.179087    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:29.179087    5884 round_trippers.go:580]     Audit-Id: 3e0ba758-175e-44b6-8bad-22d9814fda9f
	I0719 05:57:29.179087    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:29.179087    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:29.179087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:29.179087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:29.179210    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:29 GMT
	I0719 05:57:29.179436    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1918","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0719 05:57:29.684585    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:29.684585    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:29.684585    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:29.684664    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:29.687912    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:29.688712    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:29.688712    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:29 GMT
	I0719 05:57:29.688712    5884 round_trippers.go:580]     Audit-Id: 6d0c4b6d-01d4-4187-b540-a38b21aff691
	I0719 05:57:29.688712    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:29.688712    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:29.688712    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:29.688817    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:29.689157    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1918","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0719 05:57:29.689705    5884 node_ready.go:53] node "multinode-761300" has status "Ready":"False"
	I0719 05:57:30.182215    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:30.182215    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:30.182215    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:30.182215    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:30.185939    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:30.186542    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:30.186542    5884 round_trippers.go:580]     Audit-Id: 8231ac2e-ac5b-403b-9b45-aa3fff4c6bc2
	I0719 05:57:30.186542    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:30.186542    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:30.186542    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:30.186542    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:30.186542    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:30 GMT
	I0719 05:57:30.186542    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1918","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0719 05:57:30.682945    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:30.683005    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:30.683005    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:30.683005    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:30.685474    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:30.685474    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:30.685474    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:30 GMT
	I0719 05:57:30.685474    5884 round_trippers.go:580]     Audit-Id: fc5ee575-ce03-487a-8022-c853c779625e
	I0719 05:57:30.686340    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:30.686340    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:30.686340    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:30.686340    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:30.686492    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1918","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5487 chars]
	I0719 05:57:31.187307    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:31.187307    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.187406    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.187406    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.191781    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:31.191946    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.191946    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.191946    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.191946    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.191946    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.191946    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.191946    5884 round_trippers.go:580]     Audit-Id: 7f123028-232c-44fb-8e6e-b160e9feac5d
	I0719 05:57:31.192168    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:31.193066    5884 node_ready.go:49] node "multinode-761300" has status "Ready":"True"
	I0719 05:57:31.193066    5884 node_ready.go:38] duration metric: took 10.0189245s for node "multinode-761300" to be "Ready" ...
	I0719 05:57:31.193190    5884 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
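The run of near-identical GET requests above is minikube's node-readiness poll: it fetches the Node object roughly every 500 ms and inspects the `Ready` entry in `status.conditions` until its status flips to `"True"` (here after ~10 s, logged as the `duration metric`). A minimal sketch of that check in Python, assuming a plain dict parsed from the Node JSON shown in the response bodies — the names `node_is_ready` and `wait_for_node_ready` are illustrative, not minikube's actual `node_ready.go` API:

```python
import json
import time

def node_is_ready(node: dict) -> bool:
    """True iff the Node's "Ready" condition reports status "True"."""
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False

def wait_for_node_ready(fetch_node, timeout_s: float = 360.0, interval_s: float = 0.5) -> bool:
    """Poll fetch_node() until the node is Ready or the timeout expires.

    interval_s mirrors the ~500 ms cadence visible in the log timestamps.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if node_is_ready(fetch_node()):
            return True
        time.sleep(interval_s)
    return False

# Stubbed Node object with the same shape as the log's response bodies:
node_json = '{"kind":"Node","status":{"conditions":[{"type":"Ready","status":"True"}]}}'
print(node_is_ready(json.loads(node_json)))  # True
```

In the log, `fetch_node` corresponds to the repeated `GET /api/v1/nodes/multinode-761300` requests; the `resourceVersion` bump from 1802 to 1929 is the status update that finally satisfies the check.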
	I0719 05:57:31.193308    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:31.193308    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.193394    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.193394    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.203640    5884 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0719 05:57:31.203640    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.203640    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.203640    5884 round_trippers.go:580]     Audit-Id: afdc21f1-569c-4da3-a2a2-eda37222cf04
	I0719 05:57:31.203640    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.203640    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.203640    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.203640    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.205924    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1929"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86673 chars]
	I0719 05:57:31.210414    5884 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:31.210642    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:31.210642    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.210715    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.210715    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.221855    5884 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0719 05:57:31.221855    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.221855    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.221855    5884 round_trippers.go:580]     Audit-Id: c034cc9f-21e1-4df4-b41d-2818b691d5ff
	I0719 05:57:31.221855    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.221855    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.221855    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.221855    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.221855    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:31.222877    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:31.222877    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.222877    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.222877    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.226921    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:31.227396    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.227396    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.227396    5884 round_trippers.go:580]     Audit-Id: 8cadac84-e110-4df6-bbd7-5b10af75aebc
	I0719 05:57:31.227396    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.227396    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.227396    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.227396    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.227532    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:31.718875    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:31.718949    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.718949    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.718949    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.723538    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:31.724632    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.724632    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.724632    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.724632    5884 round_trippers.go:580]     Audit-Id: 290876c2-1840-41c2-ad1c-705fc628799d
	I0719 05:57:31.724632    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.724632    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.724632    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.724995    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:31.726040    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:31.726095    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:31.726095    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:31.726095    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:31.729630    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:31.729630    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:31.729630    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:31.729630    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:31.729630    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:31.730050    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:31 GMT
	I0719 05:57:31.730050    5884 round_trippers.go:580]     Audit-Id: 23606792-e254-43e4-92cb-4f2215cfa416
	I0719 05:57:31.730050    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:31.730404    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:32.217451    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:32.217628    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:32.217628    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:32.217628    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:32.220514    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:32.221549    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:32.221549    5884 round_trippers.go:580]     Audit-Id: c2357e2d-c721-4d32-8a87-76efad274056
	I0719 05:57:32.221549    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:32.221549    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:32.221549    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:32.221549    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:32.221549    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:32 GMT
	I0719 05:57:32.221803    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:32.222674    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:32.222764    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:32.222764    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:32.222764    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:32.225044    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:32.225812    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:32.225812    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:32.225812    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:32.225812    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:32.225812    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:32.225812    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:32 GMT
	I0719 05:57:32.225812    5884 round_trippers.go:580]     Audit-Id: bd205e2f-5958-4edd-9d86-090fff220c47
	I0719 05:57:32.226226    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:32.718409    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:32.718409    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:32.718409    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:32.718409    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:32.723579    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:32.723579    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:32.723579    5884 round_trippers.go:580]     Audit-Id: 205d375f-e65a-4a91-ab54-3ea9e43de3f1
	I0719 05:57:32.723579    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:32.723579    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:32.723579    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:32.724221    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:32.724221    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:32 GMT
	I0719 05:57:32.724348    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:32.725099    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:32.725099    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:32.725099    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:32.725099    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:32.728982    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:32.728982    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:32.728982    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:32.728982    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:32.728982    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:32.728982    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:32 GMT
	I0719 05:57:32.728982    5884 round_trippers.go:580]     Audit-Id: b4bbf629-008a-444c-98a5-897a92ec0b2d
	I0719 05:57:32.728982    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:32.729869    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:33.220753    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:33.220753    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:33.220860    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:33.220860    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:33.224810    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:33.225820    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:33.225843    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:33 GMT
	I0719 05:57:33.225843    5884 round_trippers.go:580]     Audit-Id: fe947b58-154c-4fc1-83bc-fd9ada9e33f6
	I0719 05:57:33.225843    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:33.225843    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:33.225843    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:33.225843    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:33.226651    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:33.227961    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:33.227961    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:33.227961    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:33.227961    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:33.232790    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:33.232790    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:33.232886    5884 round_trippers.go:580]     Audit-Id: 7b559462-ae38-42f1-adcb-7e5962fc1b0e
	I0719 05:57:33.232886    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:33.232886    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:33.232886    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:33.232886    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:33.232886    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:33 GMT
	I0719 05:57:33.233045    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:33.233592    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:33.718183    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:33.718266    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:33.718266    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:33.718320    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:33.724505    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:33.724505    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:33.724505    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:33.724505    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:33.724505    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:33.724505    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:33.724505    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:33 GMT
	I0719 05:57:33.724590    5884 round_trippers.go:580]     Audit-Id: bc7c878d-41a3-4d8c-b58f-23d47b8d3dad
	I0719 05:57:33.724660    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:33.725810    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:33.725836    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:33.725836    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:33.725836    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:33.728833    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:33.728833    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:33.729494    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:33.729494    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:33.729494    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:33.729494    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:33 GMT
	I0719 05:57:33.729617    5884 round_trippers.go:580]     Audit-Id: dcd65a7c-36b1-475c-af24-a2239865f663
	I0719 05:57:33.729617    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:33.730104    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:34.218199    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:34.218199    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:34.218291    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:34.218291    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:34.222693    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:34.223406    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:34.223406    5884 round_trippers.go:580]     Audit-Id: 5e4cdc02-8203-4140-ac90-2e7ed612d426
	I0719 05:57:34.223406    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:34.223406    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:34.223495    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:34.223495    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:34.223495    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:34 GMT
	I0719 05:57:34.223823    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:34.224781    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:34.224835    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:34.224835    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:34.224835    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:34.228013    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:34.228013    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:34.228013    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:34.228091    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:34 GMT
	I0719 05:57:34.228091    5884 round_trippers.go:580]     Audit-Id: 7f5f8a80-87c9-428a-9d06-850af648a0d6
	I0719 05:57:34.228091    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:34.228091    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:34.228091    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:34.228145    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:34.721308    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:34.721372    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:34.721372    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:34.721372    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:34.725410    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:34.725613    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:34.725613    5884 round_trippers.go:580]     Audit-Id: ec7e1a3d-9e85-428f-ac20-20aa2889bd24
	I0719 05:57:34.725613    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:34.725613    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:34.725613    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:34.725613    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:34.725613    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:34 GMT
	I0719 05:57:34.725856    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:34.726661    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:34.726720    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:34.726720    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:34.726720    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:34.729658    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:34.730603    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:34.730703    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:34 GMT
	I0719 05:57:34.730703    5884 round_trippers.go:580]     Audit-Id: 25467012-cfda-4144-aaa1-bea5e09d2e54
	I0719 05:57:34.730703    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:34.730703    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:34.730703    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:34.730743    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:34.730857    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:35.218593    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:35.218593    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:35.218593    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:35.218593    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:35.224603    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:35.224603    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:35.224745    5884 round_trippers.go:580]     Audit-Id: e67ea614-bfb9-4aa3-a56a-f0ffe9dcae8e
	I0719 05:57:35.224745    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:35.224745    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:35.224745    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:35.224806    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:35.224806    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:35 GMT
	I0719 05:57:35.224806    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:35.225932    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:35.225932    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:35.225981    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:35.225981    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:35.228616    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:35.228616    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:35.228616    5884 round_trippers.go:580]     Audit-Id: 00102d7d-d4f0-4040-b120-6f6e2516f6d5
	I0719 05:57:35.228616    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:35.228616    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:35.228616    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:35.228616    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:35.228616    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:35 GMT
	I0719 05:57:35.229565    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:35.716893    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:35.716985    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:35.716985    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:35.717095    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:35.720883    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:35.720883    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:35.720883    5884 round_trippers.go:580]     Audit-Id: f430d950-84ad-497e-a548-7cf568ee616b
	I0719 05:57:35.720883    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:35.721780    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:35.721780    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:35.721780    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:35.721780    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:35 GMT
	I0719 05:57:35.721955    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:35.723044    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:35.723044    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:35.723044    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:35.723044    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:35.726441    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:35.726441    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:35.726441    5884 round_trippers.go:580]     Audit-Id: cb4d1dea-e606-4597-9172-d9d4cea8884a
	I0719 05:57:35.726441    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:35.726676    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:35.726676    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:35.726676    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:35.726676    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:35 GMT
	I0719 05:57:35.727050    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:35.727120    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:36.222276    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:36.222276    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:36.222276    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:36.222276    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:36.226349    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:36.226349    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:36.226349    5884 round_trippers.go:580]     Audit-Id: ec013313-b8ac-42fd-a8b4-efc2c183045f
	I0719 05:57:36.226455    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:36.226455    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:36.226455    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:36.226455    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:36.226455    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:36 GMT
	I0719 05:57:36.226585    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:36.227327    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:36.227412    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:36.227684    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:36.227684    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:36.230899    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:36.230899    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:36.230899    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:36.230899    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:36.230899    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:36 GMT
	I0719 05:57:36.230899    5884 round_trippers.go:580]     Audit-Id: 21afca9e-83c6-4d0c-b0cb-2a3bd4b3fa83
	I0719 05:57:36.230899    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:36.230899    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:36.231473    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:36.721222    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:36.721222    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:36.721222    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:36.721222    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:36.726845    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:36.726941    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:36.726941    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:36.726941    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:36.726941    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:36 GMT
	I0719 05:57:36.726941    5884 round_trippers.go:580]     Audit-Id: 90a8a302-e50f-4a0c-97e4-aae9a0ffe7b9
	I0719 05:57:36.727036    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:36.727036    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:36.727790    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:36.728531    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:36.728637    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:36.728637    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:36.728637    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:36.730592    5884 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0719 05:57:36.730592    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:36.731649    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:36.731649    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:36.731649    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:36.731649    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:36 GMT
	I0719 05:57:36.731649    5884 round_trippers.go:580]     Audit-Id: d938666a-a613-4ef5-a33a-838290815684
	I0719 05:57:36.731649    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:36.732628    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:37.224485    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:37.224485    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:37.224485    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:37.224485    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:37.228083    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:37.228663    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:37.228663    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:37.228663    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:37.228663    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:37.228663    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:37 GMT
	I0719 05:57:37.228663    5884 round_trippers.go:580]     Audit-Id: 75f9ce97-11a1-40ee-b5fe-19d2d727c920
	I0719 05:57:37.228663    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:37.229134    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:37.229993    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:37.229993    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:37.229993    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:37.229993    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:37.232584    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:37.232584    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:37.233085    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:37 GMT
	I0719 05:57:37.233085    5884 round_trippers.go:580]     Audit-Id: 30ee913b-1644-43b2-a901-b242b3bd7063
	I0719 05:57:37.233085    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:37.233085    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:37.233085    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:37.233085    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:37.234241    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:37.710998    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:37.711204    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:37.711257    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:37.711257    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:37.716653    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:37.716653    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:37.716653    5884 round_trippers.go:580]     Audit-Id: 7533e1ef-b110-4af8-916c-7b6e8aff2a58
	I0719 05:57:37.716653    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:37.716653    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:37.716653    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:37.717197    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:37.717197    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:37 GMT
	I0719 05:57:37.718166    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:37.719418    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:37.719418    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:37.719418    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:37.719418    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:37.723407    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:37.723407    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:37.723407    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:37 GMT
	I0719 05:57:37.724053    5884 round_trippers.go:580]     Audit-Id: e9abd70c-d03f-44d0-8797-871de63ff944
	I0719 05:57:37.724053    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:37.724053    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:37.724053    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:37.724104    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:37.725119    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:38.211329    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:38.211415    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:38.211415    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:38.211415    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:38.214866    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:38.215885    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:38.215993    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:38.215993    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:38.215993    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:38.215993    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:38.215993    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:38 GMT
	I0719 05:57:38.216069    5884 round_trippers.go:580]     Audit-Id: f462f7f8-0783-44a2-8510-2ddd6e34e754
	I0719 05:57:38.217134    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:38.218039    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:38.218039    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:38.218039    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:38.218039    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:38.220395    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:38.220395    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:38.220395    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:38.220395    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:38.221124    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:38.221124    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:38.221124    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:38 GMT
	I0719 05:57:38.221124    5884 round_trippers.go:580]     Audit-Id: 66cf962d-f577-430e-b0a5-336d805b6155
	I0719 05:57:38.221359    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:38.222023    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:38.712236    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:38.712460    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:38.712460    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:38.712460    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:38.716872    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:38.716872    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:38.716872    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:38.716872    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:38.716872    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:38.716872    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:38 GMT
	I0719 05:57:38.716872    5884 round_trippers.go:580]     Audit-Id: 8333fb72-d0c9-4ddd-9c4e-4846235c9cc8
	I0719 05:57:38.717421    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:38.717566    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:38.718969    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:38.718969    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:38.718969    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:38.718969    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:38.721764    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:38.722674    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:38.722674    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:38.722674    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:38.722674    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:38 GMT
	I0719 05:57:38.722674    5884 round_trippers.go:580]     Audit-Id: 92563dc0-ff26-4257-bd6c-395f6de67496
	I0719 05:57:38.722674    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:38.722674    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:38.722674    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:39.213271    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:39.213347    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:39.213347    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:39.213347    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:39.217802    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:39.217852    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:39.217852    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:39.217852    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:39.217852    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:39.217852    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:39.217852    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:39 GMT
	I0719 05:57:39.217852    5884 round_trippers.go:580]     Audit-Id: ac24a718-86c1-48d7-bc98-1e2a61c39cf9
	I0719 05:57:39.217985    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:39.219037    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:39.219228    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:39.219414    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:39.219450    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:39.222605    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:39.222605    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:39.222605    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:39.222605    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:39.222605    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:39 GMT
	I0719 05:57:39.222605    5884 round_trippers.go:580]     Audit-Id: 3277de12-ba3a-4fc0-a044-387de48d7b9c
	I0719 05:57:39.223430    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:39.223430    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:39.223788    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:39.712350    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:39.712445    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:39.712474    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:39.712474    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:39.718227    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:39.718290    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:39.718290    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:39.718290    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:39.718290    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:39 GMT
	I0719 05:57:39.718290    5884 round_trippers.go:580]     Audit-Id: 148ae5ce-ec19-4675-a258-8f01f2988bcc
	I0719 05:57:39.718290    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:39.718290    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:39.718290    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:39.719129    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:39.719129    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:39.719129    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:39.719129    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:39.722792    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:39.722792    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:39.722792    5884 round_trippers.go:580]     Audit-Id: e94ffbf6-71cd-43b4-94a8-37a720af667d
	I0719 05:57:39.722792    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:39.722792    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:39.722890    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:39.722890    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:39.722890    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:39 GMT
	I0719 05:57:39.723182    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:40.225141    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:40.225141    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:40.225282    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:40.225282    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:40.229686    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:40.229782    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:40.229782    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:40.229782    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:40.229782    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:40 GMT
	I0719 05:57:40.229782    5884 round_trippers.go:580]     Audit-Id: cc6906cf-8135-4ad8-af53-e66fb84f3d10
	I0719 05:57:40.229782    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:40.229782    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:40.229782    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:40.230962    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:40.230962    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:40.231021    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:40.231021    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:40.234334    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:40.234334    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:40.234334    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:40.234334    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:40 GMT
	I0719 05:57:40.234334    5884 round_trippers.go:580]     Audit-Id: c130b00f-dc17-4000-a096-0fc34bfffd9f
	I0719 05:57:40.234334    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:40.234334    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:40.234334    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:40.235122    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:40.235592    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:40.725829    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:40.725922    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:40.725922    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:40.725922    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:40.729368    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:40.730109    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:40.730109    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:40.730109    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:40 GMT
	I0719 05:57:40.730109    5884 round_trippers.go:580]     Audit-Id: a4823366-fc92-43b7-bf67-e56c7366c554
	I0719 05:57:40.730250    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:40.730250    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:40.730250    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:40.730436    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:40.731254    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:40.731254    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:40.731309    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:40.731309    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:40.733717    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:40.733717    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:40.733717    5884 round_trippers.go:580]     Audit-Id: 61341f82-a17f-46f7-925f-2cdb18d69e23
	I0719 05:57:40.733717    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:40.733717    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:40.733717    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:40.733717    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:40.733717    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:40 GMT
	I0719 05:57:40.734573    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:41.217223    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:41.217223    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:41.217223    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:41.217223    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:41.221805    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:41.222344    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:41.222344    5884 round_trippers.go:580]     Audit-Id: cd65b363-0234-4a37-a8e1-54a2dd137696
	I0719 05:57:41.222344    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:41.222344    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:41.222344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:41.222344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:41.222344    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:41 GMT
	I0719 05:57:41.222656    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:41.223247    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:41.223842    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:41.223842    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:41.223842    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:41.228132    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:41.228303    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:41.228303    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:41.228303    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:41 GMT
	I0719 05:57:41.228303    5884 round_trippers.go:580]     Audit-Id: 4ceee7b3-6328-4073-8bd1-cbc057fd3c55
	I0719 05:57:41.228303    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:41.228303    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:41.228303    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:41.229151    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:41.718297    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:41.718297    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:41.718297    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:41.718297    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:41.722911    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:41.723536    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:41.723536    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:41 GMT
	I0719 05:57:41.723536    5884 round_trippers.go:580]     Audit-Id: 0b317b1e-389f-485a-8fda-59079b240f72
	I0719 05:57:41.723536    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:41.723536    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:41.723536    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:41.723536    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:41.723737    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:41.724592    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:41.724592    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:41.724592    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:41.724592    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:41.727703    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:41.728410    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:41.728410    5884 round_trippers.go:580]     Audit-Id: b8570dee-cf96-4099-b8d7-d3fe80a2d921
	I0719 05:57:41.728410    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:41.728410    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:41.728410    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:41.728410    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:41.728460    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:41 GMT
	I0719 05:57:41.728730    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:42.218535    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:42.218601    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:42.218601    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:42.218658    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:42.222545    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:42.223179    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:42.223280    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:42.223327    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:42 GMT
	I0719 05:57:42.223327    5884 round_trippers.go:580]     Audit-Id: f741ce2a-e14e-48d3-828c-f3be28ee8a8c
	I0719 05:57:42.223327    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:42.223327    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:42.223327    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:42.223327    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:42.224096    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:42.224233    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:42.224233    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:42.224233    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:42.228475    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:42.228475    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:42.228475    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:42.228475    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:42.228475    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:42.228475    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:42 GMT
	I0719 05:57:42.228475    5884 round_trippers.go:580]     Audit-Id: 0b97a496-4726-4f0b-a70e-cc1e05908846
	I0719 05:57:42.228475    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:42.229394    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:42.717172    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:42.717172    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:42.717172    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:42.717172    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:42.722197    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:42.722197    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:42.722197    5884 round_trippers.go:580]     Audit-Id: 78eaed26-b2ff-4d4b-bb6d-b1c7f01b7694
	I0719 05:57:42.722197    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:42.722197    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:42.722197    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:42.722197    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:42.722197    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:42 GMT
	I0719 05:57:42.722590    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:42.723306    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:42.723306    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:42.723306    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:42.723306    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:42.725898    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:42.725898    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:42.725898    5884 round_trippers.go:580]     Audit-Id: b57b47c5-2fca-44bd-9fbe-2178973f84f0
	I0719 05:57:42.725898    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:42.726624    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:42.726624    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:42.726624    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:42.726624    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:42 GMT
	I0719 05:57:42.726880    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:42.727449    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:43.218392    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:43.218506    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:43.218506    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:43.218506    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:43.222663    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:43.222663    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:43.222663    5884 round_trippers.go:580]     Audit-Id: 6a39a8d7-25cb-4291-ad2c-c3dea9fb549f
	I0719 05:57:43.222663    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:43.223007    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:43.223007    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:43.223007    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:43.223007    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:43 GMT
	I0719 05:57:43.223327    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:43.224005    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:43.224005    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:43.224078    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:43.224078    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:43.226254    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:43.226799    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:43.226882    5884 round_trippers.go:580]     Audit-Id: c0fa95e4-2fa5-48bd-8e54-c8e77c98702f
	I0719 05:57:43.226882    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:43.226964    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:43.226964    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:43.226964    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:43.226964    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:43 GMT
	I0719 05:57:43.227094    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:43.716296    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:43.716409    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:43.716409    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:43.716409    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:43.720100    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:43.720100    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:43.720100    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:43.720100    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:43.720100    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:43 GMT
	I0719 05:57:43.720100    5884 round_trippers.go:580]     Audit-Id: c1f1a8e6-2e17-4f1d-9bab-38c3b49fa869
	I0719 05:57:43.720100    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:43.720100    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:43.721325    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:43.721581    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:43.722100    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:43.722100    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:43.722100    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:43.727545    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:43.727545    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:43.727545    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:43 GMT
	I0719 05:57:43.727605    5884 round_trippers.go:580]     Audit-Id: fb23ca2d-7471-45e0-8e93-c469a9c33d56
	I0719 05:57:43.727605    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:43.727628    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:43.727654    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:43.727688    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:43.727863    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:44.218623    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:44.218623    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:44.218623    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:44.218623    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:44.222250    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:44.222739    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:44.222739    5884 round_trippers.go:580]     Audit-Id: 74f90e38-5c6c-4d0d-b92d-7dc563759c20
	I0719 05:57:44.222739    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:44.222739    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:44.222739    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:44.222739    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:44.222739    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:44 GMT
	I0719 05:57:44.223022    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:44.223885    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:44.223885    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:44.223885    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:44.223885    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:44.226651    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:44.226651    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:44.227381    5884 round_trippers.go:580]     Audit-Id: 05021496-3c43-4e00-ab4e-28b188fe8fca
	I0719 05:57:44.227522    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:44.227522    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:44.227571    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:44.227571    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:44.227571    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:44 GMT
	I0719 05:57:44.227571    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:44.718308    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:44.718472    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:44.718472    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:44.718472    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:44.723359    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:44.723359    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:44.723458    5884 round_trippers.go:580]     Audit-Id: 42876ad6-1ff2-46e9-b88e-1d40358ecca3
	I0719 05:57:44.723458    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:44.723458    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:44.723544    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:44.723544    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:44.723544    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:44 GMT
	I0719 05:57:44.723682    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:44.724805    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:44.724805    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:44.724900    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:44.724900    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:44.728092    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:44.728092    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:44.728092    5884 round_trippers.go:580]     Audit-Id: c5840966-3d21-47ae-88b5-494224da81b5
	I0719 05:57:44.728092    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:44.728092    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:44.728092    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:44.728092    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:44.728092    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:44 GMT
	I0719 05:57:44.728856    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:44.729518    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:45.213970    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:45.213970    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:45.213970    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:45.213970    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:45.217568    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:45.217568    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:45.217568    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:45.218148    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:45.218148    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:45.218148    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:45 GMT
	I0719 05:57:45.218148    5884 round_trippers.go:580]     Audit-Id: e6a021f7-f3a1-4449-83ec-c627c89c7499
	I0719 05:57:45.218148    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:45.218410    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:45.219407    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:45.219407    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:45.219407    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:45.219407    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:45.223252    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:45.223252    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:45.223252    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:45 GMT
	I0719 05:57:45.223252    5884 round_trippers.go:580]     Audit-Id: 4925b829-0c12-4702-a0ea-f9cbb370e6dc
	I0719 05:57:45.223252    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:45.223252    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:45.223252    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:45.223252    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:45.223252    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:45.712711    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:45.712800    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:45.712800    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:45.712912    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:45.716655    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:45.717158    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:45.717158    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:45.717158    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:45.717233    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:45 GMT
	I0719 05:57:45.717233    5884 round_trippers.go:580]     Audit-Id: 64331b88-0ec2-4f96-8fbf-9535f79361a4
	I0719 05:57:45.717233    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:45.717289    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:45.717471    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:45.718188    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:45.718188    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:45.718351    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:45.718351    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:45.722329    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:45.722329    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:45.722329    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:45.722329    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:45.722329    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:45 GMT
	I0719 05:57:45.722329    5884 round_trippers.go:580]     Audit-Id: 6d6a4fe7-d624-4699-b65f-b71161a1c450
	I0719 05:57:45.722329    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:45.722329    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:45.722329    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:46.217402    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:46.217526    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:46.217526    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:46.217526    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:46.220391    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:46.220391    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:46.221389    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:46.221389    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:46.221389    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:46 GMT
	I0719 05:57:46.221466    5884 round_trippers.go:580]     Audit-Id: d44663a1-24b0-472e-b10b-6aa5aed482eb
	I0719 05:57:46.221466    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:46.221466    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:46.221840    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:46.223324    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:46.223368    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:46.223368    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:46.223368    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:46.225674    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:46.225674    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:46.225674    5884 round_trippers.go:580]     Audit-Id: 98465590-ac73-4b3d-bd20-980c894fe860
	I0719 05:57:46.225674    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:46.225674    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:46.225674    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:46.225674    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:46.225674    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:46 GMT
	I0719 05:57:46.226566    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:46.720068    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:46.720068    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:46.720162    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:46.720162    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:46.725635    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:46.725635    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:46.725635    5884 round_trippers.go:580]     Audit-Id: 5ba8fee6-ab1e-40a2-822a-ef0843396572
	I0719 05:57:46.725635    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:46.725635    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:46.726467    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:46.726467    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:46.726467    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:46 GMT
	I0719 05:57:46.727005    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:46.727959    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:46.727959    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:46.727959    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:46.727959    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:46.731582    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:46.731582    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:46.731722    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:46.731722    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:46.731722    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:46.731722    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:46.731722    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:46 GMT
	I0719 05:57:46.731722    5884 round_trippers.go:580]     Audit-Id: 7740db36-712e-4f61-a783-394603e1fe1c
	I0719 05:57:46.732481    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:46.732979    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:47.220527    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:47.220527    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:47.220527    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:47.220527    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:47.224196    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:47.224196    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:47.224286    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:47 GMT
	I0719 05:57:47.224286    5884 round_trippers.go:580]     Audit-Id: 15a252ac-5573-49ec-97ba-1117fb2cb512
	I0719 05:57:47.224286    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:47.224286    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:47.224286    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:47.224286    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:47.224476    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:47.225462    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:47.225528    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:47.225528    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:47.225528    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:47.227832    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:47.227832    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:47.228574    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:47.228574    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:47.228574    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:47.228574    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:47 GMT
	I0719 05:57:47.228574    5884 round_trippers.go:580]     Audit-Id: 6d2c0a51-e0c9-46cc-8ab8-420b153c9b8c
	I0719 05:57:47.228574    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:47.228903    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:47.720889    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:47.720889    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:47.721042    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:47.721042    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:47.730436    5884 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0719 05:57:47.730773    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:47.730773    5884 round_trippers.go:580]     Audit-Id: 965b04ea-429f-43a3-a875-6a3530ef66ad
	I0719 05:57:47.730773    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:47.730773    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:47.730773    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:47.730773    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:47.730878    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:47 GMT
	I0719 05:57:47.733445    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:47.734250    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:47.734250    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:47.734250    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:47.734250    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:47.740755    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:47.740755    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:47.740755    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:47.740755    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:47.740755    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:47 GMT
	I0719 05:57:47.740755    5884 round_trippers.go:580]     Audit-Id: 99d342a0-9f7f-4386-849d-ca4f3edda4f6
	I0719 05:57:47.740755    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:47.740755    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:47.740755    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:48.222768    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:48.222927    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:48.222927    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:48.222927    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:48.227863    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:48.227971    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:48.228146    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:48.228146    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:48.228146    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:48.228146    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:48.228146    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:48 GMT
	I0719 05:57:48.228146    5884 round_trippers.go:580]     Audit-Id: 17efbd44-f15d-4e8a-a229-7b279b0ca2ec
	I0719 05:57:48.228297    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:48.229030    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:48.229030    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:48.229030    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:48.229030    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:48.232344    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:48.232344    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:48.232344    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:48.232344    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:48.232344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:48.232344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:48.232344    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:48 GMT
	I0719 05:57:48.232344    5884 round_trippers.go:580]     Audit-Id: aca9bff1-9483-486d-bec4-59f257607d55
	I0719 05:57:48.233300    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:48.711506    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:48.711506    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:48.711506    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:48.711506    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:48.717076    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:48.717076    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:48.717076    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:48 GMT
	I0719 05:57:48.717076    5884 round_trippers.go:580]     Audit-Id: 726dce85-9290-4920-9f3d-8c677438fe8b
	I0719 05:57:48.717076    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:48.717076    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:48.717076    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:48.717076    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:48.718086    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:48.718086    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:48.718086    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:48.718086    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:48.718086    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:48.721095    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:48.721095    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:48.721095    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:48.721095    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:48.721095    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:48 GMT
	I0719 05:57:48.721095    5884 round_trippers.go:580]     Audit-Id: 80344adc-49b7-4373-8dc3-58be220e328a
	I0719 05:57:48.721095    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:48.721095    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:48.722092    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:49.217258    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:49.217523    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:49.217523    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:49.217523    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:49.221117    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:49.222066    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:49.222099    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:49.222099    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:49.222099    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:49 GMT
	I0719 05:57:49.222099    5884 round_trippers.go:580]     Audit-Id: 5d1f6e2d-9c90-41d1-ba1e-1f226745715c
	I0719 05:57:49.222099    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:49.222099    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:49.222377    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:49.223149    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:49.223234    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:49.223234    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:49.223234    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:49.227153    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:49.227153    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:49.227153    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:49.227560    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:49.227560    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:49 GMT
	I0719 05:57:49.227560    5884 round_trippers.go:580]     Audit-Id: 2cc4af4d-c324-40e7-9720-33d56fb658e5
	I0719 05:57:49.227560    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:49.227560    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:49.227954    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:49.228219    5884 pod_ready.go:102] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"False"
	I0719 05:57:49.712735    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:49.712735    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:49.712735    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:49.712735    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:49.715748    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:49.716482    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:49.716482    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:49.716482    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:49.716601    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:49 GMT
	I0719 05:57:49.716601    5884 round_trippers.go:580]     Audit-Id: 6eacfce9-e49d-479a-a42e-e472738aafe6
	I0719 05:57:49.716601    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:49.716601    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:49.716846    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1817","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6843 chars]
	I0719 05:57:49.717695    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:49.717764    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:49.717764    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:49.717764    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:49.719764    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:49.720344    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:49.720344    5884 round_trippers.go:580]     Audit-Id: 6e0ca09d-55ca-4571-b079-3fac93127ac9
	I0719 05:57:49.720344    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:49.720344    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:49.720344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:49.720344    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:49.720344    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:49 GMT
	I0719 05:57:49.720617    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:50.220970    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:50.221042    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:50.221042    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:50.221042    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:50.225780    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:50.225780    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:50.225780    5884 round_trippers.go:580]     Audit-Id: 0fc439a9-db2c-4c60-a543-de18fca024ed
	I0719 05:57:50.225780    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:50.225780    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:50.225780    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:50.225780    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:50.225780    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:50 GMT
	I0719 05:57:50.225780    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1955","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0719 05:57:50.227091    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:50.227091    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:50.227091    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:50.227091    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:50.230680    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:50.230680    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:50.230680    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:50.230680    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:50.231028    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:50.231028    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:50.231028    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:50 GMT
	I0719 05:57:50.231028    5884 round_trippers.go:580]     Audit-Id: d8153fc0-3dfc-428f-a38a-52e750c50586
	I0719 05:57:50.231304    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:50.718489    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:50.718489    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:50.718602    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:50.718602    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:50.726910    5884 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0719 05:57:50.727026    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:50.727026    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:50.727026    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:50 GMT
	I0719 05:57:50.727148    5884 round_trippers.go:580]     Audit-Id: 1d830e8e-4a2b-42bb-9c73-a68f59180970
	I0719 05:57:50.727163    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:50.727163    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:50.727163    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:50.727262    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1955","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7019 chars]
	I0719 05:57:50.728134    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:50.728134    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:50.728134    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:50.728134    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:50.732120    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:50.732120    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:50.732120    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:50.732120    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:50.732120    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:50.732120    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:50.732120    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:50 GMT
	I0719 05:57:50.732120    5884 round_trippers.go:580]     Audit-Id: 1a7f5c7e-7488-4d52-ad44-e05fddd8b827
	I0719 05:57:50.732873    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.221135    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hw9kh
	I0719 05:57:51.221135    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.221135    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.221135    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.226011    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:51.226011    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.226011    5884 round_trippers.go:580]     Audit-Id: 3b86e50a-2482-4847-9fb3-5a796c9c585e
	I0719 05:57:51.226223    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.226223    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.226223    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.226223    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.226223    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.227209    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1958","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6790 chars]
	I0719 05:57:51.228281    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.228281    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.228354    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.228354    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.230816    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.230816    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.230816    5884 round_trippers.go:580]     Audit-Id: 89f457d6-c211-4a02-aaf9-a26a3533e73a
	I0719 05:57:51.230816    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.230816    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.230816    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.230816    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.231567    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.231851    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.231983    5884 pod_ready.go:92] pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.231983    5884 pod_ready.go:81] duration metric: took 20.0212156s for pod "coredns-7db6d8ff4d-hw9kh" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.231983    5884 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.231983    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-761300
	I0719 05:57:51.231983    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.231983    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.231983    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.235153    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.235153    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.235153    5884 round_trippers.go:580]     Audit-Id: 34b272cd-7e34-499e-a365-7a2c0a63a4cb
	I0719 05:57:51.235153    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.235153    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.235153    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.235153    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.235153    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.235153    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-761300","namespace":"kube-system","uid":"296a455d-9236-4939-b002-5fa6dd843880","resourceVersion":"1908","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.28.162.149:2379","kubernetes.io/config.hash":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.mirror":"581155a4bfbbdcf98e106c8ce8e86c2b","kubernetes.io/config.seen":"2024-07-19T05:57:10.588894693Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6171 chars]
	I0719 05:57:51.236099    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.236099    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.236099    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.236099    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.239072    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.239072    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.239072    5884 round_trippers.go:580]     Audit-Id: 74b85c23-133e-4de2-ab47-73b5ca6090ec
	I0719 05:57:51.239072    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.239072    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.239072    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.239072    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.239072    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.239537    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.239954    5884 pod_ready.go:92] pod "etcd-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.240041    5884 pod_ready.go:81] duration metric: took 8.0578ms for pod "etcd-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.240092    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.240218    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-761300
	I0719 05:57:51.240218    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.240253    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.240253    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.243763    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.243763    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.244217    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.244217    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.244217    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.244280    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.244280    5884 round_trippers.go:580]     Audit-Id: 4a6ecf9c-fd19-4baa-808d-0098b9591c4b
	I0719 05:57:51.244329    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.244554    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-761300","namespace":"kube-system","uid":"89d493c7-c827-467c-ae64-9cdb2b5061df","resourceVersion":"1907","creationTimestamp":"2024-07-19T05:57:16Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.28.162.149:8443","kubernetes.io/config.hash":"b21ce007ca118b4c86324a165dd45eec","kubernetes.io/config.mirror":"b21ce007ca118b4c86324a165dd45eec","kubernetes.io/config.seen":"2024-07-19T05:57:10.501200307Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:57:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7705 chars]
	I0719 05:57:51.244902    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.244902    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.244902    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.244902    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.250533    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:51.250533    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.250533    5884 round_trippers.go:580]     Audit-Id: aad45296-9001-4f10-a912-fd9dff53633a
	I0719 05:57:51.251238    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.251238    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.251238    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.251238    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.251238    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.251440    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.251489    5884 pod_ready.go:92] pod "kube-apiserver-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.251489    5884 pod_ready.go:81] duration metric: took 11.3965ms for pod "kube-apiserver-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.251489    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.251489    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-761300
	I0719 05:57:51.251489    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.251489    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.251489    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.254301    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.254301    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.254301    5884 round_trippers.go:580]     Audit-Id: 8380bf7f-bf20-4aa2-8016-d9db909fbe69
	I0719 05:57:51.254301    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.254301    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.255067    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.255152    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.255152    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.255152    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-761300","namespace":"kube-system","uid":"2124834c-1961-49fb-8699-fba2fc5dd0ac","resourceVersion":"1898","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.mirror":"91d2984bea90586f6ba6d94e358920eb","kubernetes.io/config.seen":"2024-07-19T05:33:02.001207967Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0719 05:57:51.255973    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.256002    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.256002    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.256002    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.259399    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.259399    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.259399    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.259399    5884 round_trippers.go:580]     Audit-Id: 1d67fa31-0e86-4ee5-993e-36f6d2ca3af4
	I0719 05:57:51.259399    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.259399    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.259399    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.259399    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.259399    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.260451    5884 pod_ready.go:92] pod "kube-controller-manager-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.260482    5884 pod_ready.go:81] duration metric: took 8.993ms for pod "kube-controller-manager-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.260549    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c48b9" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.260626    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c48b9
	I0719 05:57:51.260626    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.260680    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.260680    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.262876    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.263747    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.263747    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.263747    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.263747    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.263747    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.263747    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.263747    5884 round_trippers.go:580]     Audit-Id: 79b33d47-672f-4b8e-b236-fd41057288d6
	I0719 05:57:51.264036    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c48b9","generateName":"kube-proxy-","namespace":"kube-system","uid":"67e2ee42-a2c4-4ed1-a2bf-840702a255b4","resourceVersion":"1764","creationTimestamp":"2024-07-19T05:41:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:41:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0719 05:57:51.264776    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m03
	I0719 05:57:51.264776    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.264776    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.264776    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.267236    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:51.268302    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.268374    5884 round_trippers.go:580]     Audit-Id: afc98435-0a92-4a19-9bf6-3d52aede6336
	I0719 05:57:51.268374    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.268374    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.268374    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.268374    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.268374    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.268374    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m03","uid":"b19fd562-f462-4172-835f-56c42463b282","resourceVersion":"1919","creationTimestamp":"2024-07-19T05:52:27Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_52_28_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:52:27Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4303 chars]
	I0719 05:57:51.268905    5884 pod_ready.go:97] node "multinode-761300-m03" hosting pod "kube-proxy-c48b9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m03" has status "Ready":"Unknown"
	I0719 05:57:51.268905    5884 pod_ready.go:81] duration metric: took 8.3565ms for pod "kube-proxy-c48b9" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:51.268905    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300-m03" hosting pod "kube-proxy-c48b9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m03" has status "Ready":"Unknown"
	I0719 05:57:51.268905    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.422778    5884 request.go:629] Waited for 153.6388ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:57:51.423070    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c4z7f
	I0719 05:57:51.423070    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.423070    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.423070    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.426778    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.426778    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.426778    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.426778    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.426778    5884 round_trippers.go:580]     Audit-Id: 5e3c79db-d84d-42f3-b4c7-204e5ac5dd41
	I0719 05:57:51.427253    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.427253    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.427253    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.427535    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4z7f","generateName":"kube-proxy-","namespace":"kube-system","uid":"17ff8aac-2d57-44fb-a3ec-f0d6ea181881","resourceVersion":"1888","creationTimestamp":"2024-07-19T05:33:15Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6039 chars]
	I0719 05:57:51.625209    5884 request.go:629] Waited for 197.5161ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.625416    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:51.625416    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.625416    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.625416    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.629787    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:51.629787    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.630210    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.630210    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.630210    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.630210    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.630210    5884 round_trippers.go:580]     Audit-Id: 6ea6f8b5-8e83-4064-87f2-9aef9763e761
	I0719 05:57:51.630210    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.630771    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:51.631847    5884 pod_ready.go:92] pod "kube-proxy-c4z7f" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:51.631930    5884 pod_ready.go:81] duration metric: took 362.9373ms for pod "kube-proxy-c4z7f" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.631930    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:51.828293    5884 request.go:629] Waited for 196.2434ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:57:51.828293    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-proxy-mjv8l
	I0719 05:57:51.828293    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:51.828293    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:51.828293    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:51.832029    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:51.832900    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:51.832900    5884 round_trippers.go:580]     Audit-Id: a302d55d-8100-4c79-a14c-3996a03e2026
	I0719 05:57:51.832900    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:51.832900    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:51.832900    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:51.832900    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:51.832900    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:51 GMT
	I0719 05:57:51.833281    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-mjv8l","generateName":"kube-proxy-","namespace":"kube-system","uid":"4d0f7d34-4031-46d3-a580-a2d080d9d335","resourceVersion":"1787","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"06c026b7-a7b7-4276-a86c-fc9c51f31e4e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"06c026b7-a7b7-4276-a86c-fc9c51f31e4e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0719 05:57:52.031086    5884 request.go:629] Waited for 196.9262ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:57:52.031086    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300-m02
	I0719 05:57:52.031086    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.031086    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.031086    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.034804    5884 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0719 05:57:52.035434    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.035434    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.035434    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.035434    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.035434    5884 round_trippers.go:580]     Audit-Id: 3bb17b35-539c-4992-b8d6-5dfcc1b3cac7
	I0719 05:57:52.035434    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.035434    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.035861    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300-m02","uid":"e4aebee9-899f-42b7-8668-f55979a037f8","resourceVersion":"1937","creationTimestamp":"2024-07-19T05:36:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_19T05_36_26_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:36:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-att [truncated 4583 chars]
	I0719 05:57:52.036411    5884 pod_ready.go:97] node "multinode-761300-m02" hosting pod "kube-proxy-mjv8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m02" has status "Ready":"Unknown"
	I0719 05:57:52.036411    5884 pod_ready.go:81] duration metric: took 404.4755ms for pod "kube-proxy-mjv8l" in "kube-system" namespace to be "Ready" ...
	E0719 05:57:52.036493    5884 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-761300-m02" hosting pod "kube-proxy-mjv8l" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-761300-m02" has status "Ready":"Unknown"
	I0719 05:57:52.036493    5884 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:52.234618    5884 request.go:629] Waited for 197.8535ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:57:52.234618    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-761300
	I0719 05:57:52.234618    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.234618    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.234618    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.239206    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:52.239479    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.239479    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.239479    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.239549    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.239549    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.239549    5884 round_trippers.go:580]     Audit-Id: 5bfb3126-1f55-432e-a414-3f9e6d2444ff
	I0719 05:57:52.239549    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.239777    5884 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-761300","namespace":"kube-system","uid":"49a739d1-1ae3-4a41-aebc-0eb7b2b4f242","resourceVersion":"1924","creationTimestamp":"2024-07-19T05:33:02Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.mirror":"baa57cf06d1c9cb3264d7de745e86d00","kubernetes.io/config.seen":"2024-07-19T05:33:02.001209067Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5205 chars]
	I0719 05:57:52.422255    5884 request.go:629] Waited for 181.6283ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:52.422255    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes/multinode-761300
	I0719 05:57:52.422502    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.422502    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.422580    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.428937    5884 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0719 05:57:52.429006    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.429141    5884 round_trippers.go:580]     Audit-Id: bc7ad7e0-2aa5-43a6-8f77-69bcc1088a8a
	I0719 05:57:52.429166    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.429166    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.429166    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.429166    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.429166    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.430069    5884 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T05:32:58Z","fieldsType":"FieldsV1","f [truncated 5239 chars]
	I0719 05:57:52.430721    5884 pod_ready.go:92] pod "kube-scheduler-multinode-761300" in "kube-system" namespace has status "Ready":"True"
	I0719 05:57:52.430721    5884 pod_ready.go:81] duration metric: took 394.2235ms for pod "kube-scheduler-multinode-761300" in "kube-system" namespace to be "Ready" ...
	I0719 05:57:52.430721    5884 pod_ready.go:38] duration metric: took 21.2372723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 05:57:52.430721    5884 api_server.go:52] waiting for apiserver process to appear ...
	I0719 05:57:52.444169    5884 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:57:52.472914    5884 command_runner.go:130] > 1971
	I0719 05:57:52.473512    5884 api_server.go:72] duration metric: took 31.641063s to wait for apiserver process to appear ...
	I0719 05:57:52.473512    5884 api_server.go:88] waiting for apiserver healthz status ...
	I0719 05:57:52.473512    5884 api_server.go:253] Checking apiserver healthz at https://172.28.162.149:8443/healthz ...
	I0719 05:57:52.482066    5884 api_server.go:279] https://172.28.162.149:8443/healthz returned 200:
	ok
	I0719 05:57:52.482808    5884 round_trippers.go:463] GET https://172.28.162.149:8443/version
	I0719 05:57:52.482808    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.482808    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.482927    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.485605    5884 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0719 05:57:52.485814    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.485902    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.485902    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.485902    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.485947    5884 round_trippers.go:580]     Content-Length: 263
	I0719 05:57:52.485947    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.485947    5884 round_trippers.go:580]     Audit-Id: b820f40b-3f7b-46e0-9d59-77e09d96c67b
	I0719 05:57:52.485947    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.485947    5884 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0719 05:57:52.486023    5884 api_server.go:141] control plane version: v1.30.3
	I0719 05:57:52.486098    5884 api_server.go:131] duration metric: took 12.5857ms to wait for apiserver health ...
	I0719 05:57:52.486098    5884 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 05:57:52.629086    5884 request.go:629] Waited for 142.6691ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:52.629143    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:52.629143    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.629272    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.629272    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.641698    5884 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0719 05:57:52.642540    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.642540    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.642540    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.642540    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.642540    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.642540    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.642540    5884 round_trippers.go:580]     Audit-Id: 6f7ac419-e62a-4de2-b3f2-b98f4439ac78
	I0719 05:57:52.645686    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1962"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1958","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87033 chars]
	I0719 05:57:52.649688    5884 system_pods.go:59] 12 kube-system pods found
	I0719 05:57:52.649688    5884 system_pods.go:61] "coredns-7db6d8ff4d-hw9kh" [d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "etcd-multinode-761300" [296a455d-9236-4939-b002-5fa6dd843880] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kindnet-22ts9" [0d3c5a3b-fa22-4542-b9a5-478056ccc9cc] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kindnet-6wxhn" [c0859b76-8ace-4de2-a940-4344594c5d27] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kindnet-dj497" [124722d1-6c9c-4de4-b242-2f58e89b223b] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kube-apiserver-multinode-761300" [89d493c7-c827-467c-ae64-9cdb2b5061df] Running
	I0719 05:57:52.649688    5884 system_pods.go:61] "kube-controller-manager-multinode-761300" [2124834c-1961-49fb-8699-fba2fc5dd0ac] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "kube-proxy-c48b9" [67e2ee42-a2c4-4ed1-a2bf-840702a255b4] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "kube-proxy-c4z7f" [17ff8aac-2d57-44fb-a3ec-f0d6ea181881] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "kube-proxy-mjv8l" [4d0f7d34-4031-46d3-a580-a2d080d9d335] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "kube-scheduler-multinode-761300" [49a739d1-1ae3-4a41-aebc-0eb7b2b4f242] Running
	I0719 05:57:52.650996    5884 system_pods.go:61] "storage-provisioner" [87c864ea-0853-481c-ab24-2ab209760f69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 05:57:52.650996    5884 system_pods.go:74] duration metric: took 164.8963ms to wait for pod list to return data ...
	I0719 05:57:52.650996    5884 default_sa.go:34] waiting for default service account to be created ...
	I0719 05:57:52.831860    5884 request.go:629] Waited for 180.6946ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/default/serviceaccounts
	I0719 05:57:52.831860    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/default/serviceaccounts
	I0719 05:57:52.831860    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:52.831860    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:52.831860    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:52.836455    5884 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0719 05:57:52.836561    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:52 GMT
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Audit-Id: c5c1b363-db93-4439-8355-30334e4075bc
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:52.836561    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:52.836561    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:52.836561    5884 round_trippers.go:580]     Content-Length: 262
	I0719 05:57:52.836707    5884 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1962"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"401ce23d-5c82-4e9b-b140-9f6a95fa53e6","resourceVersion":"308","creationTimestamp":"2024-07-19T05:33:15Z"}}]}
	I0719 05:57:52.837038    5884 default_sa.go:45] found service account: "default"
	I0719 05:57:52.837119    5884 default_sa.go:55] duration metric: took 186.1206ms for default service account to be created ...
	I0719 05:57:52.837119    5884 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 05:57:53.035066    5884 request.go:629] Waited for 197.8603ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:53.035494    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/namespaces/kube-system/pods
	I0719 05:57:53.035494    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:53.035556    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:53.035556    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:53.042087    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:53.042087    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:53.042087    5884 round_trippers.go:580]     Audit-Id: 1b30c49a-6623-4e45-9966-8fd1b5cb47c9
	I0719 05:57:53.042087    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:53.042087    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:53.042087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:53.042087    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:53.042087    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:53 GMT
	I0719 05:57:53.043399    5884 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1962"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-hw9kh","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4","resourceVersion":"1958","creationTimestamp":"2024-07-19T05:33:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"073ca72b-58fa-484e-b674-12ec750e663c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T05:33:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"073ca72b-58fa-484e-b674-12ec750e663c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87033 chars]
	I0719 05:57:53.046985    5884 system_pods.go:86] 12 kube-system pods found
	I0719 05:57:53.046985    5884 system_pods.go:89] "coredns-7db6d8ff4d-hw9kh" [d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "etcd-multinode-761300" [296a455d-9236-4939-b002-5fa6dd843880] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kindnet-22ts9" [0d3c5a3b-fa22-4542-b9a5-478056ccc9cc] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kindnet-6wxhn" [c0859b76-8ace-4de2-a940-4344594c5d27] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kindnet-dj497" [124722d1-6c9c-4de4-b242-2f58e89b223b] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-apiserver-multinode-761300" [89d493c7-c827-467c-ae64-9cdb2b5061df] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-controller-manager-multinode-761300" [2124834c-1961-49fb-8699-fba2fc5dd0ac] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-proxy-c48b9" [67e2ee42-a2c4-4ed1-a2bf-840702a255b4] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-proxy-c4z7f" [17ff8aac-2d57-44fb-a3ec-f0d6ea181881] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-proxy-mjv8l" [4d0f7d34-4031-46d3-a580-a2d080d9d335] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "kube-scheduler-multinode-761300" [49a739d1-1ae3-4a41-aebc-0eb7b2b4f242] Running
	I0719 05:57:53.046985    5884 system_pods.go:89] "storage-provisioner" [87c864ea-0853-481c-ab24-2ab209760f69] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0719 05:57:53.046985    5884 system_pods.go:126] duration metric: took 209.8641ms to wait for k8s-apps to be running ...
	I0719 05:57:53.046985    5884 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 05:57:53.064248    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:57:53.087450    5884 system_svc.go:56] duration metric: took 40.4641ms WaitForService to wait for kubelet
	I0719 05:57:53.087450    5884 kubeadm.go:582] duration metric: took 32.2549938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 05:57:53.087450    5884 node_conditions.go:102] verifying NodePressure condition ...
	I0719 05:57:53.226268    5884 request.go:629] Waited for 138.8164ms due to client-side throttling, not priority and fairness, request: GET:https://172.28.162.149:8443/api/v1/nodes
	I0719 05:57:53.226268    5884 round_trippers.go:463] GET https://172.28.162.149:8443/api/v1/nodes
	I0719 05:57:53.226268    5884 round_trippers.go:469] Request Headers:
	I0719 05:57:53.226268    5884 round_trippers.go:473]     Accept: application/json, */*
	I0719 05:57:53.226268    5884 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0719 05:57:53.231834    5884 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0719 05:57:53.231834    5884 round_trippers.go:577] Response Headers:
	I0719 05:57:53.231834    5884 round_trippers.go:580]     Cache-Control: no-cache, private
	I0719 05:57:53.231834    5884 round_trippers.go:580]     Content-Type: application/json
	I0719 05:57:53.231834    5884 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: cf8bf203-e730-4364-8293-8a5b76c40d00
	I0719 05:57:53.231834    5884 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 24364547-876f-4052-9336-e70d7cd4bd0b
	I0719 05:57:53.232040    5884 round_trippers.go:580]     Date: Fri, 19 Jul 2024 05:57:53 GMT
	I0719 05:57:53.232040    5884 round_trippers.go:580]     Audit-Id: 2fa0269a-7d66-4710-a577-440d4fc894e5
	I0719 05:57:53.232418    5884 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1962"},"items":[{"metadata":{"name":"multinode-761300","uid":"d1ab7714-782e-4886-b5b6-209c141f087c","resourceVersion":"1929","creationTimestamp":"2024-07-19T05:32:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-761300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-761300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_19T05_33_03_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16163 chars]
	I0719 05:57:53.233510    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:53.233619    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:53.233619    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:53.233619    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:53.233619    5884 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0719 05:57:53.233619    5884 node_conditions.go:123] node cpu capacity is 2
	I0719 05:57:53.233619    5884 node_conditions.go:105] duration metric: took 146.1669ms to run NodePressure ...
	I0719 05:57:53.233619    5884 start.go:241] waiting for startup goroutines ...
	I0719 05:57:53.233619    5884 start.go:246] waiting for cluster config update ...
	I0719 05:57:53.233619    5884 start.go:255] writing updated cluster config ...
	I0719 05:57:53.239217    5884 out.go:177] 
	I0719 05:57:53.242551    5884 config.go:182] Loaded profile config "ha-062500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:57:53.254186    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:57:53.254489    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:57:53.263237    5884 out.go:177] * Starting "multinode-761300-m02" worker node in "multinode-761300" cluster
	I0719 05:57:53.268139    5884 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 05:57:53.268139    5884 cache.go:56] Caching tarball of preloaded images
	I0719 05:57:53.268473    5884 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0719 05:57:53.268473    5884 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0719 05:57:53.268810    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:57:53.270587    5884 start.go:360] acquireMachinesLock for multinode-761300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 05:57:53.270587    5884 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-761300-m02"
	I0719 05:57:53.271523    5884 start.go:96] Skipping create...Using existing machine configuration
	I0719 05:57:53.271523    5884 fix.go:54] fixHost starting: m02
	I0719 05:57:53.271745    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:57:55.507297    5884 main.go:141] libmachine: [stdout =====>] : Off
	
	I0719 05:57:55.507854    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:57:55.507854    5884 fix.go:112] recreateIfNeeded on multinode-761300-m02: state=Stopped err=<nil>
	W0719 05:57:55.507934    5884 fix.go:138] unexpected machine state, will restart: <nil>
	I0719 05:57:55.514661    5884 out.go:177] * Restarting existing hyperv VM for "multinode-761300-m02" ...
	I0719 05:57:55.519398    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-761300-m02
	I0719 05:57:58.705517    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:57:58.705517    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:57:58.705517    5884 main.go:141] libmachine: Waiting for host to start...
	I0719 05:57:58.705517    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:01.060350    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:01.060446    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:01.060446    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:03.674968    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:58:03.674968    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:04.681739    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:06.976738    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:06.976935    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:06.976935    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:09.600887    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:58:09.600887    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:10.602595    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:12.929180    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:12.929180    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:12.929951    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:15.693600    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:58:15.694601    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:16.701064    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:19.060925    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:19.060925    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:19.060925    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:21.673532    5884 main.go:141] libmachine: [stdout =====>] : 
	I0719 05:58:21.674525    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:22.678437    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:24.979345    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:24.980435    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:24.980495    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:27.627767    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:27.627767    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:27.631325    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:29.915707    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:29.915707    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:29.915880    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:32.523416    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:32.524470    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:32.524785    5884 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-761300\config.json ...
	I0719 05:58:32.527551    5884 machine.go:94] provisionDockerMachine start ...
	I0719 05:58:32.527672    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:34.782958    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:34.782958    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:34.783063    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:37.453792    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:37.453792    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:37.462364    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:58:37.462566    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:58:37.462566    5884 main.go:141] libmachine: About to run SSH command:
	hostname
	I0719 05:58:37.583786    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0719 05:58:37.583786    5884 buildroot.go:166] provisioning hostname "multinode-761300-m02"
	I0719 05:58:37.583786    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:39.805060    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:39.805155    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:39.805155    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:42.472068    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:42.472068    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:42.477094    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:58:42.477890    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:58:42.477890    5884 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-761300-m02 && echo "multinode-761300-m02" | sudo tee /etc/hostname
	I0719 05:58:42.643124    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-761300-m02
	
	I0719 05:58:42.643228    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:44.857252    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:44.857252    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:44.857645    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:47.457519    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:47.457519    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:47.466088    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:58:47.466777    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:58:47.466777    5884 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-761300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-761300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-761300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 05:58:47.607112    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 05:58:47.607112    5884 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0719 05:58:47.607112    5884 buildroot.go:174] setting up certificates
	I0719 05:58:47.607112    5884 provision.go:84] configureAuth start
	I0719 05:58:47.607112    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:49.777125    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:49.777125    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:49.777125    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:52.378885    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:52.378885    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:52.378947    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:54.559435    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:54.559435    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:54.559515    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:58:57.194514    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:58:57.195517    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:57.195563    5884 provision.go:143] copyHostCerts
	I0719 05:58:57.195779    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0719 05:58:57.195954    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0719 05:58:57.195954    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0719 05:58:57.196610    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0719 05:58:57.197935    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0719 05:58:57.197967    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0719 05:58:57.197967    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0719 05:58:57.198623    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0719 05:58:57.199553    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0719 05:58:57.199553    5884 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0719 05:58:57.199553    5884 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0719 05:58:57.200124    5884 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0719 05:58:57.200736    5884 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-761300-m02 san=[127.0.0.1 172.28.162.127 localhost minikube multinode-761300-m02]
	I0719 05:58:57.616219    5884 provision.go:177] copyRemoteCerts
	I0719 05:58:57.629370    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 05:58:57.629370    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:58:59.847533    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:58:59.848327    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:58:59.848327    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:02.469729    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:02.469729    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:02.470871    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:59:02.569724    5884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9402542s)
	I0719 05:59:02.569792    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0719 05:59:02.570440    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0719 05:59:02.616254    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0719 05:59:02.616644    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 05:59:02.662800    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0719 05:59:02.663310    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0719 05:59:02.707759    5884 provision.go:87] duration metric: took 15.1004621s to configureAuth
	I0719 05:59:02.707939    5884 buildroot.go:189] setting minikube options for container-runtime
	I0719 05:59:02.708388    5884 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:59:02.708388    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:04.892415    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:04.893127    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:04.893303    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:07.554525    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:07.554525    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:07.560490    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:07.561298    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:07.561298    5884 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 05:59:07.687263    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 05:59:07.687321    5884 buildroot.go:70] root file system type: tmpfs
	I0719 05:59:07.687598    5884 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 05:59:07.687656    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:09.900231    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:09.900231    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:09.901234    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:12.516794    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:12.516794    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:12.523844    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:12.524517    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:12.524517    5884 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.28.162.149"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 05:59:12.683600    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.28.162.149
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 05:59:12.683756    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:14.899640    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:14.899804    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:14.899804    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:17.563280    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:17.563898    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:17.569640    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:17.569807    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:17.569807    5884 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 05:59:20.074028    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 05:59:20.074100    5884 machine.go:97] duration metric: took 47.545969s to provisionDockerMachine
	I0719 05:59:20.074156    5884 start.go:293] postStartSetup for "multinode-761300-m02" (driver="hyperv")
	I0719 05:59:20.074156    5884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 05:59:20.086489    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 05:59:20.086489    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:22.308203    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:22.308203    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:22.309014    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:24.959341    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:24.959512    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:24.960039    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:59:25.072245    5884 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9856949s)
	I0719 05:59:25.091228    5884 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 05:59:25.103592    5884 command_runner.go:130] > NAME=Buildroot
	I0719 05:59:25.103592    5884 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0719 05:59:25.103592    5884 command_runner.go:130] > ID=buildroot
	I0719 05:59:25.103592    5884 command_runner.go:130] > VERSION_ID=2023.02.9
	I0719 05:59:25.103592    5884 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0719 05:59:25.103945    5884 info.go:137] Remote host: Buildroot 2023.02.9
	I0719 05:59:25.103945    5884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0719 05:59:25.104125    5884 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0719 05:59:25.105124    5884 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> 96042.pem in /etc/ssl/certs
	I0719 05:59:25.105124    5884 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem -> /etc/ssl/certs/96042.pem
	I0719 05:59:25.117301    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 05:59:25.140505    5884 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\96042.pem --> /etc/ssl/certs/96042.pem (1708 bytes)
	I0719 05:59:25.190121    5884 start.go:296] duration metric: took 5.115902s for postStartSetup
	I0719 05:59:25.190121    5884 fix.go:56] duration metric: took 1m31.9174757s for fixHost
	I0719 05:59:25.190121    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:27.454518    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:27.454783    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:27.454992    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:30.182789    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:30.182789    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:30.187447    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:30.188404    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:30.188404    5884 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 05:59:30.309555    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721368770.321155784
	
	I0719 05:59:30.309555    5884 fix.go:216] guest clock: 1721368770.321155784
	I0719 05:59:30.309555    5884 fix.go:229] Guest: 2024-07-19 05:59:30.321155784 +0000 UTC Remote: 2024-07-19 05:59:25.190121 +0000 UTC m=+261.333707901 (delta=5.131034784s)
	I0719 05:59:30.309555    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:32.544384    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:32.544384    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:32.544384    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:35.231365    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:35.231365    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:35.237468    5884 main.go:141] libmachine: Using SSH client type: native
	I0719 05:59:35.237618    5884 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xa4aa40] 0xa4d620 <nil>  [] 0s} 172.28.162.127 22 <nil> <nil>}
	I0719 05:59:35.238161    5884 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1721368770
	I0719 05:59:35.369957    5884 main.go:141] libmachine: SSH cmd err, output: <nil>: Fri Jul 19 05:59:30 UTC 2024
	
	I0719 05:59:35.369957    5884 fix.go:236] clock set: Fri Jul 19 05:59:30 UTC 2024
	 (err=<nil>)
	I0719 05:59:35.370090    5884 start.go:83] releasing machines lock for "multinode-761300-m02", held for 1m42.0973651s
	I0719 05:59:35.370230    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:37.650768    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:37.650768    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:37.651612    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:40.291332    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:40.291332    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:40.294470    5884 out.go:177] * Found network options:
	I0719 05:59:40.302865    5884 out.go:177]   - NO_PROXY=172.28.162.149
	W0719 05:59:40.305863    5884 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 05:59:40.308223    5884 out.go:177]   - NO_PROXY=172.28.162.149
	W0719 05:59:40.310928    5884 proxy.go:119] fail to check proxy env: Error ip not in block
	W0719 05:59:40.312870    5884 proxy.go:119] fail to check proxy env: Error ip not in block
	I0719 05:59:40.316429    5884 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0719 05:59:40.316429    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:40.329388    5884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0719 05:59:40.329388    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:59:42.678671    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:42.678671    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:42.678780    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:42.678975    5884 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:59:42.679041    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:42.679041    5884 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:59:45.467002    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:45.467002    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:45.468098    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:59:45.493480    5884 main.go:141] libmachine: [stdout =====>] : 172.28.162.127
	
	I0719 05:59:45.493480    5884 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:59:45.494059    5884 sshutil.go:53] new ssh client: &{IP:172.28.162.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:59:45.558549    5884 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0719 05:59:45.559003    5884 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2425101s)
	W0719 05:59:45.559116    5884 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0719 05:59:45.593539    5884 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0719 05:59:45.594307    5884 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2648552s)
	W0719 05:59:45.594412    5884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 05:59:45.606859    5884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 05:59:45.637616    5884 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0719 05:59:45.637702    5884 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 05:59:45.637702    5884 start.go:495] detecting cgroup driver to use...
	I0719 05:59:45.638010    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0719 05:59:45.657791    5884 out.go:239] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0719 05:59:45.658237    5884 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0719 05:59:45.678348    5884 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0719 05:59:45.689712    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 05:59:45.724483    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 05:59:45.745891    5884 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 05:59:45.756738    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 05:59:45.792026    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:59:45.824702    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 05:59:45.855332    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 05:59:45.887087    5884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 05:59:45.916907    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 05:59:45.949875    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0719 05:59:45.982578    5884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0719 05:59:46.016856    5884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 05:59:46.036160    5884 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0719 05:59:46.049255    5884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 05:59:46.080316    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:59:46.276689    5884 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 05:59:46.312424    5884 start.go:495] detecting cgroup driver to use...
	I0719 05:59:46.325676    5884 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 05:59:46.350125    5884 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0719 05:59:46.350279    5884 command_runner.go:130] > [Unit]
	I0719 05:59:46.350279    5884 command_runner.go:130] > Description=Docker Application Container Engine
	I0719 05:59:46.350279    5884 command_runner.go:130] > Documentation=https://docs.docker.com
	I0719 05:59:46.350279    5884 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0719 05:59:46.350279    5884 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0719 05:59:46.350279    5884 command_runner.go:130] > StartLimitBurst=3
	I0719 05:59:46.350279    5884 command_runner.go:130] > StartLimitIntervalSec=60
	I0719 05:59:46.350279    5884 command_runner.go:130] > [Service]
	I0719 05:59:46.350279    5884 command_runner.go:130] > Type=notify
	I0719 05:59:46.350279    5884 command_runner.go:130] > Restart=on-failure
	I0719 05:59:46.350279    5884 command_runner.go:130] > Environment=NO_PROXY=172.28.162.149
	I0719 05:59:46.350279    5884 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0719 05:59:46.350279    5884 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0719 05:59:46.350279    5884 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0719 05:59:46.350279    5884 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0719 05:59:46.350279    5884 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0719 05:59:46.350279    5884 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0719 05:59:46.350279    5884 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0719 05:59:46.350279    5884 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0719 05:59:46.350279    5884 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0719 05:59:46.350279    5884 command_runner.go:130] > ExecStart=
	I0719 05:59:46.350279    5884 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0719 05:59:46.350279    5884 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0719 05:59:46.350279    5884 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0719 05:59:46.350279    5884 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0719 05:59:46.350279    5884 command_runner.go:130] > LimitNOFILE=infinity
	I0719 05:59:46.350279    5884 command_runner.go:130] > LimitNPROC=infinity
	I0719 05:59:46.350279    5884 command_runner.go:130] > LimitCORE=infinity
	I0719 05:59:46.350279    5884 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0719 05:59:46.350279    5884 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0719 05:59:46.350279    5884 command_runner.go:130] > TasksMax=infinity
	I0719 05:59:46.350279    5884 command_runner.go:130] > TimeoutStartSec=0
	I0719 05:59:46.350279    5884 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0719 05:59:46.350279    5884 command_runner.go:130] > Delegate=yes
	I0719 05:59:46.350279    5884 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0719 05:59:46.350279    5884 command_runner.go:130] > KillMode=process
	I0719 05:59:46.350279    5884 command_runner.go:130] > [Install]
	I0719 05:59:46.350279    5884 command_runner.go:130] > WantedBy=multi-user.target
	I0719 05:59:46.368276    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:59:46.405356    5884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 05:59:46.445316    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 05:59:46.483740    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:59:46.526536    5884 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 05:59:46.604602    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 05:59:46.628481    5884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 05:59:46.661586    5884 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0719 05:59:46.673631    5884 ssh_runner.go:195] Run: which cri-dockerd
	I0719 05:59:46.682735    5884 command_runner.go:130] > /usr/bin/cri-dockerd
	I0719 05:59:46.695342    5884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 05:59:46.713282    5884 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 05:59:46.756269    5884 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 05:59:46.969335    5884 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 05:59:47.160084    5884 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 05:59:47.160197    5884 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0719 05:59:47.206611    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:59:47.401056    5884 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 05:59:50.103981    5884 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7028923s)
	I0719 05:59:50.115909    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0719 05:59:50.151736    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:59:50.188581    5884 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 05:59:50.389292    5884 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 05:59:50.581727    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:59:50.776998    5884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 05:59:50.824550    5884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0719 05:59:50.861360    5884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 05:59:51.061188    5884 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0719 05:59:51.172050    5884 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 05:59:51.184438    5884 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 05:59:51.194527    5884 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0719 05:59:51.194964    5884 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0719 05:59:51.194964    5884 command_runner.go:130] > Device: 0,22	Inode: 856         Links: 1
	I0719 05:59:51.194964    5884 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0719 05:59:51.194964    5884 command_runner.go:130] > Access: 2024-07-19 05:59:51.098436496 +0000
	I0719 05:59:51.194964    5884 command_runner.go:130] > Modify: 2024-07-19 05:59:51.098436496 +0000
	I0719 05:59:51.194964    5884 command_runner.go:130] > Change: 2024-07-19 05:59:51.102436544 +0000
	I0719 05:59:51.194964    5884 command_runner.go:130] >  Birth: -
	I0719 05:59:51.195090    5884 start.go:563] Will wait 60s for crictl version
	I0719 05:59:51.207421    5884 ssh_runner.go:195] Run: which crictl
	I0719 05:59:51.213864    5884 command_runner.go:130] > /usr/bin/crictl
	I0719 05:59:51.226361    5884 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 05:59:51.279187    5884 command_runner.go:130] > Version:  0.1.0
	I0719 05:59:51.279269    5884 command_runner.go:130] > RuntimeName:  docker
	I0719 05:59:51.279269    5884 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0719 05:59:51.279269    5884 command_runner.go:130] > RuntimeApiVersion:  v1
	I0719 05:59:51.279269    5884 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0719 05:59:51.288593    5884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:59:51.321417    5884 command_runner.go:130] > 27.0.3
	I0719 05:59:51.331404    5884 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 05:59:51.363963    5884 command_runner.go:130] > 27.0.3
	I0719 05:59:51.369890    5884 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0719 05:59:51.376939    5884 out.go:177]   - env NO_PROXY=172.28.162.149
	
	
	==> Docker <==
	Jul 19 05:57:48 multinode-761300 dockerd[1093]: time="2024-07-19T05:57:48.776116631Z" level=info msg="ignoring event" container=520b9666040c4cabbb4b07a10b1fc8bdc3937905c11ff2bc10e2a11b7b77f315 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.776927932Z" level=warning msg="cleaning up after shim disconnected" id=520b9666040c4cabbb4b07a10b1fc8bdc3937905c11ff2bc10e2a11b7b77f315 namespace=moby
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.777178432Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.987758393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.987959494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.987978394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.990984896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.987366093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.997358501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.997381701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:57:48 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:48.997665401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:57:49 multinode-761300 cri-dockerd[1361]: time="2024-07-19T05:57:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1274088de826371d3709c4277cc42270ac24f5e9c3fbc4a117c5af4ea38826e8/resolv.conf as [nameserver 172.28.160.1]"
	Jul 19 05:57:49 multinode-761300 cri-dockerd[1361]: time="2024-07-19T05:57:49Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/94c6b9d21b26795e0116c8738fef7cd707ea0c09acefcde3265184173435384a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 05:57:49 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:49.561310840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:57:49 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:49.561637840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:57:49 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:49.562631741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:57:49 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:49.562857942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:57:49 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:49.572728651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:57:49 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:49.572785251Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:57:49 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:49.572797351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:57:49 multinode-761300 dockerd[1099]: time="2024-07-19T05:57:49.572945352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:58:01 multinode-761300 dockerd[1099]: time="2024-07-19T05:58:01.856244511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 05:58:01 multinode-761300 dockerd[1099]: time="2024-07-19T05:58:01.856447109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 05:58:01 multinode-761300 dockerd[1099]: time="2024-07-19T05:58:01.857150505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 05:58:01 multinode-761300 dockerd[1099]: time="2024-07-19T05:58:01.857723102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	76f426b1b9ce2       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   91cb750f1a789       storage-provisioner
	b0a0a274211d5       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   94c6b9d21b267       busybox-fc5497c4f-n4tql
	3ea6a858f966a       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   1274088de8263       coredns-7db6d8ff4d-hw9kh
	7c6fd1bcdccdc       5cc3abe5717db                                                                                         2 minutes ago       Running             kindnet-cni               1                   e9d0c4b92716b       kindnet-dj497
	2391c8e68ac52       55bb025d2cfa5                                                                                         2 minutes ago       Running             kube-proxy                1                   abfbbf60c503e       kube-proxy-c4z7f
	520b9666040c4       6e38f40d628db                                                                                         2 minutes ago       Exited              storage-provisioner       1                   91cb750f1a789       storage-provisioner
	2d06048c3816c       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      0                   6dab2f598ca17       etcd-multinode-761300
	c6a8b5b3f1561       1f6d574d502f3                                                                                         3 minutes ago       Running             kube-apiserver            0                   6e4c6cbc695ea       kube-apiserver-multinode-761300
	aee640f43640c       3edc18e7b7672                                                                                         3 minutes ago       Running             kube-scheduler            1                   95c4bb4966ec5       kube-scheduler-multinode-761300
	aa4a741a5c9f6       76932a3b37d7e                                                                                         3 minutes ago       Running             kube-controller-manager   1                   5acce15770c28       kube-controller-manager-multinode-761300
	4a5a7f7d7c88b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Exited              busybox                   0                   3376af93be166       busybox-fc5497c4f-n4tql
	17479f193bde6       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   8880cece050b3       coredns-7db6d8ff4d-hw9kh
	81297ef97ccfe       kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493              26 minutes ago      Exited              kindnet-cni               0                   342774c2cfe86       kindnet-dj497
	c7f3e45f7ac5a       55bb025d2cfa5                                                                                         26 minutes ago      Exited              kube-proxy                0                   605bd6887ea94       kube-proxy-c4z7f
	1e25c1f162f5c       3edc18e7b7672                                                                                         27 minutes ago      Exited              kube-scheduler            0                   b8966b015c45c       kube-scheduler-multinode-761300
	86b38e87981e5       76932a3b37d7e                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   20495b8d48375       kube-controller-manager-multinode-761300
	
	
	==> coredns [17479f193bde] <==
	[INFO] 10.244.0.3:56803 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000119402s
	[INFO] 10.244.0.3:49469 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000167302s
	[INFO] 10.244.0.3:55677 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113101s
	[INFO] 10.244.0.3:45799 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000153001s
	[INFO] 10.244.0.3:34957 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000269103s
	[INFO] 10.244.0.3:42013 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000098001s
	[INFO] 10.244.0.3:52144 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000172802s
	[INFO] 10.244.1.2:33742 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163902s
	[INFO] 10.244.1.2:34795 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000317004s
	[INFO] 10.244.1.2:43217 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000137402s
	[INFO] 10.244.1.2:55546 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000224302s
	[INFO] 10.244.0.3:55937 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000239803s
	[INFO] 10.244.0.3:48596 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000068101s
	[INFO] 10.244.0.3:47339 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000077101s
	[INFO] 10.244.0.3:45789 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163303s
	[INFO] 10.244.1.2:53057 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140801s
	[INFO] 10.244.1.2:49936 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000138202s
	[INFO] 10.244.1.2:51934 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000086001s
	[INFO] 10.244.1.2:50345 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093801s
	[INFO] 10.244.0.3:38065 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000244803s
	[INFO] 10.244.0.3:42402 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000454005s
	[INFO] 10.244.0.3:54728 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000466406s
	[INFO] 10.244.0.3:52215 - 5 "PTR IN 1.160.28.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000087501s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3ea6a858f966] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7f5c0142d9355ec4970b374c78373c500f701bd80cdb6426f96a5673e8cfb1bf63ab9d981830721714466276f42f4b9b60f28f050bcccaa4663b1f4f6260a7ca
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57342 - 65228 "HINFO IN 3615639591517857754.4071923492147253562. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064331703s
	
	
	==> describe nodes <==
	Name:               multinode-761300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-761300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-761300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_19T05_33_03_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 05:32:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-761300
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 06:00:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 05:57:30 +0000   Fri, 19 Jul 2024 05:32:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 05:57:30 +0000   Fri, 19 Jul 2024 05:32:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 05:57:30 +0000   Fri, 19 Jul 2024 05:32:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 05:57:30 +0000   Fri, 19 Jul 2024 05:57:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.28.162.149
	  Hostname:    multinode-761300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 11d96d7eed23495c9b2b9e22c16989f9
	  System UUID:                802f23c7-7e66-2447-8cf2-4f28d0672512
	  Boot ID:                    906a6fbb-8946-404c-aa0f-dea8bec48447
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-n4tql                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-hw9kh                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-761300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m58s
	  kube-system                 kindnet-dj497                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-761300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-controller-manager-multinode-761300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-c4z7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-761300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 26m                  kube-proxy       
	  Normal  Starting                 2m55s                kube-proxy       
	  Normal  Starting                 27m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     27m                  kubelet          Node multinode-761300 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  27m                  kubelet          Node multinode-761300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                  kubelet          Node multinode-761300 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27m                  node-controller  Node multinode-761300 event: Registered Node multinode-761300 in Controller
	  Normal  NodeReady                26m                  kubelet          Node multinode-761300 status is now: NodeReady
	  Normal  Starting                 3m4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m4s (x8 over 3m4s)  kubelet          Node multinode-761300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m4s (x8 over 3m4s)  kubelet          Node multinode-761300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m4s (x7 over 3m4s)  kubelet          Node multinode-761300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m45s                node-controller  Node multinode-761300 event: Registered Node multinode-761300 in Controller
	
	
	Name:               multinode-761300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-761300-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-761300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T05_36_26_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 05:36:25 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-761300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 05:53:56 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 05:52:44 +0000   Fri, 19 Jul 2024 05:54:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 05:52:44 +0000   Fri, 19 Jul 2024 05:54:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 05:52:44 +0000   Fri, 19 Jul 2024 05:54:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 05:52:44 +0000   Fri, 19 Jul 2024 05:54:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.167.151
	  Hostname:    multinode-761300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 c918f138aaff43f498eaacc539897bb1
	  System UUID:                62e15326-3b75-2c4c-8a83-de31ba1535c2
	  Boot ID:                    a1a8f8ea-3af9-4d28-b846-189f682f48fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-22cdf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kindnet-6wxhn              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-mjv8l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-761300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-761300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-761300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node multinode-761300-m02 event: Registered Node multinode-761300-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-761300-m02 status is now: NodeReady
	  Normal  NodeNotReady             5m34s              node-controller  Node multinode-761300-m02 status is now: NodeNotReady
	  Normal  RegisteredNode           2m45s              node-controller  Node multinode-761300-m02 event: Registered Node multinode-761300-m02 in Controller
	
	
	Name:               multinode-761300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-761300-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-761300
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_19T05_52_28_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 05:52:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-761300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 05:53:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 05:52:44 +0000   Fri, 19 Jul 2024 05:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 05:52:44 +0000   Fri, 19 Jul 2024 05:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 05:52:44 +0000   Fri, 19 Jul 2024 05:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 05:52:44 +0000   Fri, 19 Jul 2024 05:54:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.28.165.227
	  Hostname:    multinode-761300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1c7a928394746d9a8801069c3fc821c
	  System UUID:                fcea77f0-512f-3144-a5d0-e5d669aed6af
	  Boot ID:                    d0bc93b2-3f18-4eb9-82db-86d70e27488a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-22ts9       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-proxy-c48b9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 7m43s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m (x2 over 18m)      kubelet          Node multinode-761300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x2 over 18m)      kubelet          Node multinode-761300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x2 over 18m)      kubelet          Node multinode-761300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-761300-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m47s (x2 over 7m47s)  kubelet          Node multinode-761300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m47s (x2 over 7m47s)  kubelet          Node multinode-761300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m47s (x2 over 7m47s)  kubelet          Node multinode-761300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m44s                  node-controller  Node multinode-761300-m03 event: Registered Node multinode-761300-m03 in Controller
	  Normal  NodeReady                7m30s                  kubelet          Node multinode-761300-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m54s                  node-controller  Node multinode-761300-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           2m45s                  node-controller  Node multinode-761300-m03 event: Registered Node multinode-761300-m03 in Controller
	
	
	==> dmesg <==
	[  +1.273749] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.161037] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.930190] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul19 05:56] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.193564] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[Jul19 05:57] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.099624] kauditd_printk_skb: 71 callbacks suppressed
	[  +0.535693] systemd-fstab-generator[1059]: Ignoring "noauto" option for root device
	[  +0.206164] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +0.215784] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +3.021619] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.191521] systemd-fstab-generator[1326]: Ignoring "noauto" option for root device
	[  +0.214818] systemd-fstab-generator[1338]: Ignoring "noauto" option for root device
	[  +0.269629] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[  +0.873630] systemd-fstab-generator[1475]: Ignoring "noauto" option for root device
	[  +0.095895] kauditd_printk_skb: 202 callbacks suppressed
	[  +3.565168] systemd-fstab-generator[1616]: Ignoring "noauto" option for root device
	[  +1.477459] kauditd_printk_skb: 54 callbacks suppressed
	[  +5.591339] kauditd_printk_skb: 20 callbacks suppressed
	[  +3.695922] systemd-fstab-generator[2444]: Ignoring "noauto" option for root device
	[  +8.152720] kauditd_printk_skb: 70 callbacks suppressed
	[Jul19 05:58] kauditd_printk_skb: 15 callbacks suppressed
	[ +25.263854] hrtimer: interrupt took 2299893 ns
	
	
	==> etcd [2d06048c3816] <==
	{"level":"info","ts":"2024-07-19T05:57:12.786372Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"ea87c63def213d9a","initial-advertise-peer-urls":["https://172.28.162.149:2380"],"listen-peer-urls":["https://172.28.162.149:2380"],"advertise-client-urls":["https://172.28.162.149:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.28.162.149:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T05:57:12.786484Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T05:57:12.787349Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T05:57:12.788112Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T05:57:12.788216Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T05:57:12.788789Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea87c63def213d9a switched to configuration voters=(16899694096038313370)"}
	{"level":"info","ts":"2024-07-19T05:57:12.788886Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"600e41776d6e5bf4","local-member-id":"ea87c63def213d9a","added-peer-id":"ea87c63def213d9a","added-peer-peer-urls":["https://172.28.162.16:2380"]}
	{"level":"info","ts":"2024-07-19T05:57:12.788996Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"600e41776d6e5bf4","local-member-id":"ea87c63def213d9a","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:57:12.789069Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T05:57:12.787661Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.28.162.149:2380"}
	{"level":"info","ts":"2024-07-19T05:57:12.791671Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.28.162.149:2380"}
	{"level":"info","ts":"2024-07-19T05:57:13.933075Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea87c63def213d9a is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T05:57:13.933162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea87c63def213d9a became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T05:57:13.934807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea87c63def213d9a received MsgPreVoteResp from ea87c63def213d9a at term 2"}
	{"level":"info","ts":"2024-07-19T05:57:13.934861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea87c63def213d9a became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T05:57:13.934953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea87c63def213d9a received MsgVoteResp from ea87c63def213d9a at term 3"}
	{"level":"info","ts":"2024-07-19T05:57:13.936316Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea87c63def213d9a became leader at term 3"}
	{"level":"info","ts":"2024-07-19T05:57:13.936434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea87c63def213d9a elected leader ea87c63def213d9a at term 3"}
	{"level":"info","ts":"2024-07-19T05:57:13.940892Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"ea87c63def213d9a","local-member-attributes":"{Name:multinode-761300 ClientURLs:[https://172.28.162.149:2379]}","request-path":"/0/members/ea87c63def213d9a/attributes","cluster-id":"600e41776d6e5bf4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T05:57:13.94111Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T05:57:13.941957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T05:57:13.942186Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T05:57:13.941232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T05:57:13.948028Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.28.162.149:2379"}
	{"level":"info","ts":"2024-07-19T05:57:13.948285Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 06:00:15 up 4 min,  0 users,  load average: 0.13, 0.20, 0.09
	Linux multinode-761300 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [7c6fd1bcdccd] <==
	I0719 05:59:30.121864       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:59:40.123346       1 main.go:299] Handling node with IPs: map[172.28.162.149:{}]
	I0719 05:59:40.123450       1 main.go:303] handling current node
	I0719 05:59:40.123470       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:59:40.123478       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:59:40.123778       1 main.go:299] Handling node with IPs: map[172.28.165.227:{}]
	I0719 05:59:40.123804       1 main.go:326] Node multinode-761300-m03 has CIDR [10.244.3.0/24] 
	I0719 05:59:50.117264       1 main.go:299] Handling node with IPs: map[172.28.162.149:{}]
	I0719 05:59:50.117504       1 main.go:303] handling current node
	I0719 05:59:50.117672       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:59:50.117740       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:59:50.118044       1 main.go:299] Handling node with IPs: map[172.28.165.227:{}]
	I0719 05:59:50.118169       1 main.go:326] Node multinode-761300-m03 has CIDR [10.244.3.0/24] 
	I0719 06:00:00.124324       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 06:00:00.124431       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 06:00:00.125024       1 main.go:299] Handling node with IPs: map[172.28.165.227:{}]
	I0719 06:00:00.125042       1 main.go:326] Node multinode-761300-m03 has CIDR [10.244.3.0/24] 
	I0719 06:00:00.125108       1 main.go:299] Handling node with IPs: map[172.28.162.149:{}]
	I0719 06:00:00.125201       1 main.go:303] handling current node
	I0719 06:00:10.125076       1 main.go:299] Handling node with IPs: map[172.28.165.227:{}]
	I0719 06:00:10.125233       1 main.go:326] Node multinode-761300-m03 has CIDR [10.244.3.0/24] 
	I0719 06:00:10.125608       1 main.go:299] Handling node with IPs: map[172.28.162.149:{}]
	I0719 06:00:10.125678       1 main.go:303] handling current node
	I0719 06:00:10.125816       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 06:00:10.125874       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [81297ef97ccf] <==
	I0719 05:53:55.456630       1 main.go:303] handling current node
	I0719 05:54:05.456961       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:54:05.457013       1 main.go:303] handling current node
	I0719 05:54:05.457050       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:54:05.457073       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:54:05.457448       1 main.go:299] Handling node with IPs: map[172.28.165.227:{}]
	I0719 05:54:05.457590       1 main.go:326] Node multinode-761300-m03 has CIDR [10.244.3.0/24] 
	I0719 05:54:15.462262       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:54:15.462364       1 main.go:303] handling current node
	I0719 05:54:15.462385       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:54:15.462393       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:54:15.462580       1 main.go:299] Handling node with IPs: map[172.28.165.227:{}]
	I0719 05:54:15.462954       1 main.go:326] Node multinode-761300-m03 has CIDR [10.244.3.0/24] 
	I0719 05:54:25.456381       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:54:25.456525       1 main.go:303] handling current node
	I0719 05:54:25.456547       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:54:25.456554       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:54:25.456921       1 main.go:299] Handling node with IPs: map[172.28.165.227:{}]
	I0719 05:54:25.456977       1 main.go:326] Node multinode-761300-m03 has CIDR [10.244.3.0/24] 
	I0719 05:54:35.461717       1 main.go:299] Handling node with IPs: map[172.28.162.16:{}]
	I0719 05:54:35.461820       1 main.go:303] handling current node
	I0719 05:54:35.461843       1 main.go:299] Handling node with IPs: map[172.28.167.151:{}]
	I0719 05:54:35.461851       1 main.go:326] Node multinode-761300-m02 has CIDR [10.244.1.0/24] 
	I0719 05:54:35.462388       1 main.go:299] Handling node with IPs: map[172.28.165.227:{}]
	I0719 05:54:35.462485       1 main.go:326] Node multinode-761300-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c6a8b5b3f156] <==
	I0719 05:57:15.599135       1 aggregator.go:165] initial CRD sync complete...
	I0719 05:57:15.599350       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 05:57:15.599582       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 05:57:15.656583       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 05:57:15.674401       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 05:57:15.676738       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 05:57:15.678787       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 05:57:15.679856       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 05:57:15.680045       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 05:57:15.682468       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0719 05:57:15.685350       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 05:57:15.685803       1 policy_source.go:224] refreshing policies
	I0719 05:57:15.697739       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 05:57:15.700551       1 cache.go:39] Caches are synced for autoregister controller
	I0719 05:57:15.713254       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 05:57:16.482800       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0719 05:57:17.202327       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.162.149 172.28.162.16]
	I0719 05:57:17.204130       1 controller.go:615] quota admission added evaluator for: endpoints
	I0719 05:57:17.228118       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 05:57:18.804998       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 05:57:19.013641       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 05:57:19.033458       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 05:57:19.150319       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 05:57:19.161924       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0719 05:57:37.207553       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.28.162.149]
	
	
	==> kube-controller-manager [86b38e87981e] <==
	I0719 05:36:25.905178       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-761300-m02\" does not exist"
	I0719 05:36:25.920845       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-761300-m02" podCIDRs=["10.244.1.0/24"]
	I0719 05:36:29.924369       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-761300-m02"
	I0719 05:36:54.913426       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:37:21.760572       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="89.647054ms"
	I0719 05:37:21.795725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.419919ms"
	I0719 05:37:21.795947       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.5µs"
	I0719 05:37:24.665622       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.047725ms"
	I0719 05:37:24.665742       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31µs"
	I0719 05:37:24.766270       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.397567ms"
	I0719 05:37:24.767646       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.1µs"
	I0719 05:41:15.493378       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-761300-m03\" does not exist"
	I0719 05:41:15.493458       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:41:15.540014       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-761300-m03" podCIDRs=["10.244.2.0/24"]
	I0719 05:41:20.136082       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-761300-m03"
	I0719 05:41:44.751887       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:49:35.274914       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:52:21.157311       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:52:27.712932       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:52:27.713884       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-761300-m03\" does not exist"
	I0719 05:52:27.748941       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-761300-m03" podCIDRs=["10.244.3.0/24"]
	I0719 05:52:44.350757       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:54:20.420462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-761300-m02"
	I0719 05:54:40.557016       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.873353ms"
	I0719 05:54:40.557609       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.701µs"
	
	
	==> kube-controller-manager [aa4a741a5c9f] <==
	I0719 05:57:29.038078       1 shared_informer.go:320] Caches are synced for node
	I0719 05:57:29.038128       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0719 05:57:29.038466       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0719 05:57:29.039308       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0719 05:57:29.039434       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0719 05:57:29.052757       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0719 05:57:29.062028       1 shared_informer.go:320] Caches are synced for taint
	I0719 05:57:29.064747       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0719 05:57:29.081799       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0719 05:57:29.090139       1 shared_informer.go:320] Caches are synced for GC
	I0719 05:57:29.095058       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-761300"
	I0719 05:57:29.096679       1 shared_informer.go:320] Caches are synced for daemon sets
	I0719 05:57:29.096898       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-761300-m02"
	I0719 05:57:29.096990       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-761300-m03"
	I0719 05:57:29.097494       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0719 05:57:29.096812       1 shared_informer.go:320] Caches are synced for TTL
	I0719 05:57:29.500682       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:57:29.522952       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 05:57:29.522972       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 05:57:34.105903       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0719 05:57:49.864410       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.094324ms"
	I0719 05:57:49.864791       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.001µs"
	I0719 05:57:49.924329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="158µs"
	I0719 05:57:50.951144       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.789786ms"
	I0719 05:57:50.951704       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="110.099µs"
	
	
	==> kube-proxy [2391c8e68ac5] <==
	I0719 05:57:19.036746       1 server_linux.go:69] "Using iptables proxy"
	I0719 05:57:19.103667       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.162.149"]
	I0719 05:57:19.241362       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 05:57:19.242484       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 05:57:19.243544       1 server_linux.go:165] "Using iptables Proxier"
	I0719 05:57:19.248639       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 05:57:19.249772       1 server.go:872] "Version info" version="v1.30.3"
	I0719 05:57:19.249828       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:57:19.252716       1 config.go:192] "Starting service config controller"
	I0719 05:57:19.253365       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:57:19.253803       1 config.go:319] "Starting node config controller"
	I0719 05:57:19.253888       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:57:19.256784       1 config.go:101] "Starting endpoint slice config controller"
	I0719 05:57:19.256896       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:57:19.354319       1 shared_informer.go:320] Caches are synced for node config
	I0719 05:57:19.354363       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:57:19.360326       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [c7f3e45f7ac5] <==
	I0719 05:33:17.247310       1 server_linux.go:69] "Using iptables proxy"
	I0719 05:33:17.266745       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.28.162.16"]
	I0719 05:33:17.335859       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 05:33:17.336129       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 05:33:17.336392       1 server_linux.go:165] "Using iptables Proxier"
	I0719 05:33:17.340299       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 05:33:17.341598       1 server.go:872] "Version info" version="v1.30.3"
	I0719 05:33:17.341834       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:33:17.343550       1 config.go:192] "Starting service config controller"
	I0719 05:33:17.343610       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 05:33:17.343638       1 config.go:101] "Starting endpoint slice config controller"
	I0719 05:33:17.343771       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 05:33:17.345233       1 config.go:319] "Starting node config controller"
	I0719 05:33:17.345471       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 05:33:17.444786       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 05:33:17.444830       1 shared_informer.go:320] Caches are synced for service config
	I0719 05:33:17.449592       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [1e25c1f162f5] <==
	E0719 05:32:59.751059       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 05:32:59.776436       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 05:32:59.777003       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 05:32:59.839535       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 05:32:59.839645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 05:32:59.877145       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 05:32:59.877192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 05:32:59.877377       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 05:32:59.877888       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 05:32:59.890177       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 05:32:59.890220       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 05:32:59.892022       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 05:32:59.894628       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 05:33:00.010258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 05:33:00.010397       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 05:33:00.033374       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 05:33:00.033622       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 05:33:00.069187       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 05:33:00.069640       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 05:33:00.091838       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 05:33:00.092390       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 05:33:00.099779       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 05:33:00.099822       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0719 05:33:01.680818       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0719 05:54:42.593580       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aee640f43640] <==
	I0719 05:57:13.438062       1 serving.go:380] Generated self-signed cert in-memory
	W0719 05:57:15.544586       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 05:57:15.544630       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 05:57:15.545161       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 05:57:15.545279       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 05:57:15.620077       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 05:57:15.620215       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 05:57:15.623729       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 05:57:15.623823       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 05:57:15.624673       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 05:57:15.626360       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 05:57:15.723970       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 05:57:32 multinode-761300 kubelet[1623]: E0719 05:57:32.219892    1623 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4-config-volume podName:d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4 nodeName:}" failed. No retries permitted until 2024-07-19 05:57:48.219870272 +0000 UTC m=+37.873216771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4-config-volume") pod "coredns-7db6d8ff4d-hw9kh" (UID: "d2b7511e-ea48-417f-a0c5-0cfd9d9c41b4") : object "kube-system"/"coredns" not registered
	Jul 19 05:57:32 multinode-761300 kubelet[1623]: E0719 05:57:32.320187    1623 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 19 05:57:32 multinode-761300 kubelet[1623]: E0719 05:57:32.320425    1623 projected.go:200] Error preparing data for projected volume kube-api-access-jfthb for pod default/busybox-fc5497c4f-n4tql: object "default"/"kube-root-ca.crt" not registered
	Jul 19 05:57:32 multinode-761300 kubelet[1623]: E0719 05:57:32.321042    1623 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8302851-41b4-4d49-90b7-7a98190dfa1d-kube-api-access-jfthb podName:f8302851-41b4-4d49-90b7-7a98190dfa1d nodeName:}" failed. No retries permitted until 2024-07-19 05:57:48.321021393 +0000 UTC m=+37.974367992 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-jfthb" (UniqueName: "kubernetes.io/projected/f8302851-41b4-4d49-90b7-7a98190dfa1d-kube-api-access-jfthb") pod "busybox-fc5497c4f-n4tql" (UID: "f8302851-41b4-4d49-90b7-7a98190dfa1d") : object "default"/"kube-root-ca.crt" not registered
	Jul 19 05:57:49 multinode-761300 kubelet[1623]: I0719 05:57:49.826076    1623 scope.go:117] "RemoveContainer" containerID="7992ac3e32925823720de898ee8c2183f41e4bfef365680ee1b1f35057ad05e9"
	Jul 19 05:57:49 multinode-761300 kubelet[1623]: I0719 05:57:49.826586    1623 scope.go:117] "RemoveContainer" containerID="520b9666040c4cabbb4b07a10b1fc8bdc3937905c11ff2bc10e2a11b7b77f315"
	Jul 19 05:57:49 multinode-761300 kubelet[1623]: E0719 05:57:49.826858    1623 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(87c864ea-0853-481c-ab24-2ab209760f69)\"" pod="kube-system/storage-provisioner" podUID="87c864ea-0853-481c-ab24-2ab209760f69"
	Jul 19 05:58:01 multinode-761300 kubelet[1623]: I0719 05:58:01.651008    1623 scope.go:117] "RemoveContainer" containerID="520b9666040c4cabbb4b07a10b1fc8bdc3937905c11ff2bc10e2a11b7b77f315"
	Jul 19 05:58:10 multinode-761300 kubelet[1623]: I0719 05:58:10.656949    1623 scope.go:117] "RemoveContainer" containerID="d8ebf4b1a3d905fc5f71f0c9b8d2f4349edfa22797f13b33f8958f70f2ef26aa"
	Jul 19 05:58:10 multinode-761300 kubelet[1623]: E0719 05:58:10.702455    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:58:10 multinode-761300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:58:10 multinode-761300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:58:10 multinode-761300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:58:10 multinode-761300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 05:58:10 multinode-761300 kubelet[1623]: I0719 05:58:10.710966    1623 scope.go:117] "RemoveContainer" containerID="d59292a30318a5213e2331de93542af9e1416bb064c2e100af37f04d3fe39e42"
	Jul 19 05:59:10 multinode-761300 kubelet[1623]: E0719 05:59:10.697694    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 05:59:10 multinode-761300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 05:59:10 multinode-761300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 05:59:10 multinode-761300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 05:59:10 multinode-761300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 06:00:10 multinode-761300 kubelet[1623]: E0719 06:00:10.697131    1623 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 06:00:10 multinode-761300 kubelet[1623]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 06:00:10 multinode-761300 kubelet[1623]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 06:00:10 multinode-761300 kubelet[1623]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 06:00:10 multinode-761300 kubelet[1623]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0719 06:00:06.438086    8796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-761300 -n multinode-761300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-761300 -n multinode-761300: (12.760845s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-761300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (426.84s)

TestRunningBinaryUpgrade (10800.442s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2600409833.exe start -p running-upgrade-359900 --memory=2200 --vm-driver=hyperv
E0719 06:20:10.183098    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2600409833.exe start -p running-upgrade-359900 --memory=2200 --vm-driver=hyperv: (8m26.8995171s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-359900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
panic: test timed out after 3h0m0s
running tests:
	TestCertExpiration (4m37s)
	TestDockerFlags (2m14s)
	TestPause (5m22s)
	TestPause/serial (5m22s)
	TestPause/serial/Start (5m22s)
	TestRunningBinaryUpgrade (9m38s)
	TestStartStop (5m22s)

goroutine 1782 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 3 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000125ba0, 0xc0008fdbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc0006aa2e8, {0x51d80e0, 0x2a, 0x2a}, {0x2e30426?, 0xc680cf?, 0x51fb4e0?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006808c0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006808c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 12 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc00068d380)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 1681 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0xbc7ec5?, {0xc001b8fb20?, 0x22cedc0?, 0xc001b8fb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xbbfdf6?, 0x5288940?, 0xc001b8fbf8?, 0xbb29a5?, 0x22106180eb8?, 0x41?, 0xba8ba6?, 0x4?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5e4, {0xc0013c013a?, 0x2c6, 0x0?}, 0xbb2be5?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001600a08?, {0xc0013c013a?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001600a08, {0xc0013c013a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000884830, {0xc0013c013a?, 0xc000584a80?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001406660, {0x3e0b180, 0xc00010aa38})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e0b2c0, 0xc001406660}, {0x3e0b180, 0xc00010aa38}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b8fe78?, {0x3e0b2c0, 0xc001406660})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001b8ff38?, {0x3e0b2c0?, 0xc001406660?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e0b2c0, 0xc001406660}, {0x3e0b240, 0xc000884830}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000054ba0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 761
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 1778 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000003080, 0xc001baa6c0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 761
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 41 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 29
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

goroutine 60 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3e30200, 0xc000224180}, 0xc000993f50, 0xc000993f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3e30200, 0xc000224180}, 0x90?, 0xc000993f50, 0xc000993f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3e30200?, 0xc000224180?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000993fd0?, 0xd3e4a4?, 0xc0006c85a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 153
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 874 [IO wait, 159 minutes]:
internal/poll.runtime_pollWait(0x2214ba70e90, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000100408?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc000154520, 0xc001607bb0)
	/usr/local/go/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000154508, 0x2e4, {0xc0004445a0?, 0x0?, 0x0?}, 0xc000100008?)
	/usr/local/go/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000154508, 0xc001607d90)
	/usr/local/go/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000154508)
	/usr/local/go/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc000638520)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000638520)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0007200f0, {0x3e232c0, 0xc000638520})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0007200f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00150d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 871
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 61 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 60
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1761 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0xc00050ae00?, {0xc000987b20?, 0x0?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0x0?, 0x6320363733342020?, 0x35313a6f672e696e?, 0x6570796822205d38?, 0x7669726420227672?, 0x6f6422202b207265?, 0x6f63202272656b63?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x65c, {0xc000565d81?, 0x27f, 0xc641df?}, 0xe6a01c9fe6a01c9f?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001a89908?, {0xc000565d81?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001a89908, {0xc000565d81, 0x27f, 0x27f})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000692208, {0xc000565d81?, 0xc000987d98?, 0xe16?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00080b0e0, {0x3e0b180, 0xc00010aa48})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e0b2c0, 0xc00080b0e0}, {0x3e0b180, 0xc00010aa48}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3e0b2c0, 0xc00080b0e0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xbb0c56?, {0x3e0b2c0?, 0xc00080b0e0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e0b2c0, 0xc00080b0e0}, {0x3e0b240, 0xc000692208}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0006d08f0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 762
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 1764 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc001876180, 0xc0000549c0)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1745
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 59 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008488d0, 0x3b)
	/usr/local/go/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x28c7c60?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000829860)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008489c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00061e950, {0x3e0c5c0, 0xc0012ef470}, 0x1, 0xc000224180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00061e950, 0x3b9aca00, 0x0, 0x1, 0xc000224180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 153
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1740 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00092c680)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00092c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00092c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00092c680, 0xc00154e380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1737
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1760 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x6e6961746e6f6322?, {0xc001b93b20?, 0x79646165726e7520?, 0x3a73757461747320?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x507473616c222c22?, 0x656d695465626f72?, 0xc001b93bf8?, 0xbb283b?, 0x69546e6f69746973?, 0x323032223a22656d?, 0xba8ba6?, 0x32333a36333a3330?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x644, {0xc0013c09fe?, 0x202, 0xc641df?}, 0x6e6f632d76702d6b?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001a89408?, {0xc0013c09fe?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001a89408, {0xc0013c09fe, 0x202, 0x202})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0006921f0, {0xc0013c09fe?, 0xc001b93d98?, 0x6f?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00080a960, {0x3e0b180, 0xc000a34130})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e0b2c0, 0xc00080a960}, {0x3e0b180, 0xc000a34130}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3e0b2c0, 0xc00080a960})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xbb0c56?, {0x3e0b2c0?, 0xc00080a960?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e0b2c0, 0xc00080a960}, {0x3e0b240, 0xc0006921f0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 762
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 760 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00162e1a0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00162e1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestCertOptions(0xc00162e1a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:36 +0x92
testing.tRunner(0xc00162e1a0, 0x38b4630)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 152 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000829980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 96
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 153 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008489c0, 0xc000224180)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 96
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 1721 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00162fba0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00162fba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestKubernetesUpgrade(0xc00162fba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:215 +0x39
testing.tRunner(0xc00162fba0, 0x38b46d8)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1720 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00162fa00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00162fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStoppedBinaryUpgrade(0xc00162fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:143 +0x86
testing.tRunner(0xc00162fa00, 0x38b4760)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1719 [syscall, locked to thread]:
syscall.SyscallN(0x7ffaaef74e10?, {0xc0008fd960?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x5d4, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0009e8600)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001876300)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001876300)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00162f860, 0xc001876300)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestRunningBinaryUpgrade(0xc00162f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/version_upgrade_test.go:130 +0x788
testing.tRunner(0xc00162f860, 0x38b4738)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 762 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0x7ffaaef74e10?, {0xc0008f9868?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x2a0, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0016fb2c0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001876480)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001876480)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00162e4e0, 0xc001876480)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestDockerFlags(0xc00162e4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:51 +0x489
testing.tRunner(0xc00162e4e0, 0x38b4640)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 761 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffaaef74e10?, {0xc00134d9a8?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x300, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc000825bf0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc000003080)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc000003080)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00162e340, 0xc000003080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestCertExpiration(0xc00162e340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/cert_options_test.go:123 +0x2c5
testing.tRunner(0xc00162e340, 0x38b4628)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1794 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc001876480, 0xc000054b40)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 762
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 1737 [chan receive, 5 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00092c1a0, 0x38b4930)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 1717
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 764 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00162e820)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00162e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestForceSystemdEnv(0xc00162e820)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/docker_test.go:146 +0x92
testing.tRunner(0xc00162e820, 0x38b4668)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1742 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00092cd00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00092cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00092cd00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00092cd00, 0xc00154e400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1737
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1739 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00092c4e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00092c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00092c4e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00092c4e0, 0xc00154e340)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1737
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1743 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00092cea0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00092cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00092cea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00092cea0, 0xc00154e480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1737
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1763 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0xbc7ec5?, {0xc0009dfb20?, 0x22cedc0?, 0xc0009dfb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xbbfdf6?, 0x5288940?, 0xc0009dfbf8?, 0xbb29a5?, 0x22106180a28?, 0xc0000a6041?, 0x34280?, 0x85?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x5b8, {0xc00071a93a?, 0x2c6, 0x0?}, 0xc0014706c0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001472f08?, {0xc00071a93a?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001472f08, {0xc00071a93a, 0x2c6, 0x2c6})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a34220, {0xc00071a93a?, 0xc0018e6fc0?, 0x13a?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00080b0b0, {0x3e0b180, 0xc000884178})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e0b2c0, 0xc00080b0b0}, {0x3e0b180, 0xc000884178}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc0009dfe78?, {0x3e0b2c0, 0xc00080b0b0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0009dff38?, {0x3e0b2c0?, 0xc00080b0b0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e0b2c0, 0xc00080b0b0}, {0x3e0b240, 0xc000a34220}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0002247e0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1745
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 1762 [syscall, 3 minutes, locked to thread]:
syscall.SyscallN(0xbc7ec5?, {0xc001cc1b20?, 0x2306240?, 0xc001cc1b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xbbfdf6?, 0x5288940?, 0xc001cc1bf8?, 0xbb29a5?, 0x22106180108?, 0xc0000a604d?, 0x34280?, 0x85?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x620, {0xc0017b0228?, 0x5d8, 0xc641df?}, 0xc0000a6030?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001472288?, {0xc0017b0228?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001472288, {0xc0017b0228, 0x5d8, 0x5d8})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a34208, {0xc0017b0228?, 0x2214b591f48?, 0x227?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00080ac60, {0x3e0b180, 0xc00010a028})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e0b2c0, 0xc00080ac60}, {0x3e0b180, 0xc00010a028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001cc1e78?, {0x3e0b2c0, 0xc00080ac60})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc001cc1f38?, {0x3e0b2c0?, 0xc00080ac60?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e0b2c0, 0xc00080ac60}, {0x3e0b240, 0xc000a34208}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000224f00?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1745
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 1745 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffaaef74e10?, {0xc0009dda78?, 0x3?, 0x0?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x5b4, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0016facc0)
	/usr/local/go/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001876180)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001876180)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc00092d380, 0xc001876180)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateFreshStart({0x3e30040, 0xc000858150}, 0xc00092d380, {0xc0015ce1e0, 0xc})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:80 +0x275
k8s.io/minikube/test/integration.TestPause.func1.1(0xc00092d380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:66 +0x43
testing.tRunner(0xc00092d380, 0xc00154e4c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1744
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1738 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00092c340)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00092c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00092c340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00092c340, 0xc00154e100)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1737
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1601 [chan receive, 9 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00162eb60)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00162eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00162eb60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:47 +0x39
testing.tRunner(0xc00162eb60, 0x38b4710)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1717 [chan receive, 5 minutes]:
testing.(*T).Run(0xc00162f520, {0x2dd4334?, 0xcf73d3?}, 0x38b4930)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00162f520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00162f520, 0x38b4758)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1779 [syscall, locked to thread]:
syscall.SyscallN(0xbc7ec5?, {0xc00135db20?, 0x22c44b0?, 0xc00135db58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xbbfdf6?, 0x5288940?, 0xc00135dbf8?, 0xbb29a5?, 0x22106180598?, 0x4d?, 0xba8ba6?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x624, {0xc00071c26f?, 0x591, 0xc641df?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001a88288?, {0xc00071c26f?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001a88288, {0xc00071c26f, 0x591, 0x591})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a34110, {0xc00071c26f?, 0x2214b6a2ee8?, 0x20c?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00080b8f0, {0x3e0b180, 0xc000692010})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e0b2c0, 0xc00080b8f0}, {0x3e0b180, 0xc000692010}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3e0b2c0, 0xc00080b8f0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xbb0c56?, {0x3e0b2c0?, 0xc00080b8f0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e0b2c0, 0xc00080b8f0}, {0x3e0b240, 0xc000a34110}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000054660?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1719
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 1680 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0xbc7ec5?, {0xc00135fb20?, 0x22cedc0?, 0xc00135fb58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xbbfdf6?, 0x5288940?, 0xc00135fbf8?, 0xbb29a5?, 0x22106180eb8?, 0xc00135fb4d?, 0xcf2d2f?, 0xc0012f0000?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x618, {0xc00054ba07?, 0x5f9, 0x0?}, 0xc00135fc04?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001600008?, {0xc00054ba07?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001600008, {0xc00054ba07, 0x5f9, 0x5f9})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000884818, {0xc00054ba07?, 0x2214b6a2ee8?, 0x207?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001406630, {0x3e0b180, 0xc000692008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e0b2c0, 0xc001406630}, {0x3e0b180, 0xc000692008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x2dd23f0?, {0x3e0b2c0, 0xc001406630})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xcf8440?, {0x3e0b2c0?, 0xc001406630?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e0b2c0, 0xc001406630}, {0x3e0b240, 0xc000884818}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0x38b4718?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 761
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 1744 [chan receive, 5 minutes]:
testing.(*T).Run(0xc00092d040, {0x2dd4339?, 0x24?}, 0xc00154e4c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause.func1(0xc00092d040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:65 +0x1ee
testing.tRunner(0xc00092d040, 0xc00080a540)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1683
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1781 [select]:
os/exec.(*Cmd).watchCtx(0xc001876300, 0xc001baa180)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 1719
	/usr/local/go/src/os/exec/exec.go:754 +0x9e9

goroutine 1780 [syscall, locked to thread]:
syscall.SyscallN(0xbc7ec5?, {0xc001319b20?, 0x22c44b0?, 0xc001319b58?})
	/usr/local/go/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xbbfdf6?, 0x5288940?, 0xc001319bf8?, 0xbb29a5?, 0x22106180a28?, 0x77?, 0xba8ba6?, 0xc0002d54d0?)
	/usr/local/go/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x630, {0xc00087420d?, 0x1df3, 0xc641df?}, 0xc000367c20?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:442
syscall.Read(0xc001a88788?, {0xc00087420d?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc001a88788, {0xc00087420d, 0x1df3, 0x1df3})
	/usr/local/go/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000a34148, {0xc00087420d?, 0xc001319d98?, 0x1e35?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00080bad0, {0x3e0b180, 0xc00010aab8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3e0b2c0, 0xc00080bad0}, {0x3e0b180, 0xc00010aab8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3e0b2c0, 0xc00080bad0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xbb0c56?, {0x3e0b2c0?, 0xc00080bad0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x3e0b2c0, 0xc00080bad0}, {0x3e0b240, 0xc000a34148}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001920360?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 1719
	/usr/local/go/src/os/exec/exec.go:727 +0xa25

goroutine 1741 [chan receive, 5 minutes]:
testing.(*testContext).waitParallel(0xc0000b98b0)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00092c9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00092c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00092c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00092c9c0, 0xc00154e3c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1737
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1683 [chan receive, 5 minutes]:
testing.(*T).Run(0xc00162eea0, {0x2dd5856?, 0xd18c2e2800?}, 0xc00080a540)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestPause(0xc00162eea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/pause_test.go:41 +0x159
testing.tRunner(0xc00162eea0, 0x38b4728)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

TestNoKubernetes/serial/StartWithK8s (299.9s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-359900 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-359900 --driver=hyperv: exit status 1 (4m59.6022361s)

-- stdout --
	* [NoKubernetes-359900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-359900" primary control-plane node in "NoKubernetes-359900" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0719 06:16:57.362313    4368 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-359900 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-359900 -n NoKubernetes-359900
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-359900 -n NoKubernetes-359900: exit status 7 (293.5753ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	W0719 06:21:56.949830   12564 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-359900" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (299.90s)
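Every run in this report, failing or passing, emits the same stderr warning about the Docker CLI context "default"; since it also shows up in passing runs below, it looks cosmetic rather than causal here. The long hex directory in the warning's path appears to be no accident: the Docker CLI stores context metadata under a directory named by the SHA-256 digest of the context name, so hashing the string "default" should reproduce the path component seen above. A minimal check of that claim, assuming `sha256sum` and `cut` from GNU coreutils are available:

```shell
# The warning's path ends in ...\contexts\meta\<hex>\meta.json.
# That <hex> component should be sha256 of the context name
# "default", not a hash of any file contents.
printf 'default' | sha256sum | cut -d ' ' -f 1
```

If the claim holds, the printed digest matches the `37a8eec1...33f0688f` component in every warning in this report.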


Test pass (104/144)

Order  Test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 22.36
4 TestDownloadOnly/v1.20.0/preload-exists 0.08
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.35
9 TestDownloadOnly/v1.20.0/DeleteAll 1.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.28
12 TestDownloadOnly/v1.30.3/json-events 11.93
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 1.16
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 1.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 12.5
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.31
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 1.17
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 1.23
30 TestBinaryMirror 7.47
31 TestOffline 444.68
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.27
36 TestAddons/Setup 448.62
39 TestAddons/parallel/Ingress 68.36
40 TestAddons/parallel/InspektorGadget 27.83
41 TestAddons/parallel/MetricsServer 21.1
42 TestAddons/parallel/HelmTiller 38.53
44 TestAddons/parallel/CSI 93.99
45 TestAddons/parallel/Headlamp 34.55
46 TestAddons/parallel/CloudSpanner 21.52
47 TestAddons/parallel/LocalPath 86.74
48 TestAddons/parallel/NvidiaDevicePlugin 22.8
49 TestAddons/parallel/Yakd 6.02
50 TestAddons/parallel/Volcano 150.73
53 TestAddons/serial/GCPAuth/Namespaces 0.32
54 TestAddons/StoppedEnableDisable 59.35
58 TestForceSystemdFlag 256.84
66 TestErrorSpam/start 17.42
67 TestErrorSpam/status 37.62
68 TestErrorSpam/pause 23.35
69 TestErrorSpam/unpause 23.43
70 TestErrorSpam/stop 61.94
73 TestFunctional/serial/CopySyncFile 0.03
74 TestFunctional/serial/StartWithProxy 212.94
75 TestFunctional/serial/AuditLog 0
77 TestFunctional/serial/KubeContext 0.13
81 TestFunctional/serial/CacheCmd/cache/add_remote 348.29
82 TestFunctional/serial/CacheCmd/cache/add_local 60.78
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.26
84 TestFunctional/serial/CacheCmd/cache/list 0.26
87 TestFunctional/serial/CacheCmd/cache/delete 0.49
94 TestFunctional/delete_echo-server_images 0.02
95 TestFunctional/delete_my-image_image 0.01
96 TestFunctional/delete_minikube_cached_images 0.02
100 TestMultiControlPlane/serial/StartCluster 744.24
101 TestMultiControlPlane/serial/DeployApp 12.5
103 TestMultiControlPlane/serial/AddWorkerNode 273.02
104 TestMultiControlPlane/serial/NodeLabels 0.2
105 TestMultiControlPlane/serial/HAppyAfterClusterStart 29.9
106 TestMultiControlPlane/serial/CopyFile 657.44
110 TestImageBuild/serial/Setup 204.04
111 TestImageBuild/serial/NormalBuild 10.18
112 TestImageBuild/serial/BuildWithBuildArg 9.1
113 TestImageBuild/serial/BuildWithDockerIgnore 7.96
114 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.74
118 TestJSONOutput/start/Command 247.54
119 TestJSONOutput/start/Audit 0
121 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
122 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
124 TestJSONOutput/pause/Command 8.09
125 TestJSONOutput/pause/Audit 0
127 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
128 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
130 TestJSONOutput/unpause/Command 7.94
131 TestJSONOutput/unpause/Audit 0
133 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/stop/Command 35.14
137 TestJSONOutput/stop/Audit 0
139 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
140 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
141 TestErrorJSONOutput 1.41
146 TestMainNoArgs 0.26
147 TestMinikubeProfile 528.88
150 TestMountStart/serial/StartWithMountFirst 159.4
151 TestMountStart/serial/VerifyMountFirst 9.9
152 TestMountStart/serial/StartWithMountSecond 158.54
153 TestMountStart/serial/VerifyMountSecond 9.76
154 TestMountStart/serial/DeleteFirst 31.49
155 TestMountStart/serial/VerifyMountPostDelete 9.61
156 TestMountStart/serial/Stop 27.04
157 TestMountStart/serial/RestartStopped 121.04
158 TestMountStart/serial/VerifyMountPostStop 9.84
161 TestMultiNode/serial/FreshStart2Nodes 448.83
162 TestMultiNode/serial/DeployApp2Nodes 9.13
164 TestMultiNode/serial/AddNode 234.9
165 TestMultiNode/serial/MultiNodeLabels 0.18
166 TestMultiNode/serial/ProfileList 11.77
167 TestMultiNode/serial/CopyFile 372.16
168 TestMultiNode/serial/StopNode 78.66
169 TestMultiNode/serial/StartAfterStop 198.72
174 TestPreload 529.23
175 TestScheduledStopWindows 333.65
185 TestNoKubernetes/serial/StartNoK8sWithVersion 0.43
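The three columns above are run order, test name, and duration in seconds, so standard text tools can rank them. A minimal sketch that sorts a few rows copied verbatim from the table to surface the slowest passing tests; a real run would pipe in all 104 rows rather than the four samples inlined here:

```shell
# Sort "order name seconds" rows numerically by the third column,
# slowest first, and keep the top two.
printf '%s\n' \
  '100 TestMultiControlPlane/serial/StartCluster 744.24' \
  '174 TestPreload 529.23' \
  '3 TestDownloadOnly/v1.20.0/json-events 22.36' \
  '36 TestAddons/Setup 448.62' \
| sort -k3,3 -rn | head -n 2
```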
TestDownloadOnly/v1.20.0/json-events (22.36s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-907700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-907700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (22.3572407s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (22.36s)

TestDownloadOnly/v1.20.0/preload-exists (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.08s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-907700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-907700: exit status 85 (351.1907ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-907700 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |          |
	|         | -p download-only-907700        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:26:35
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:26:35.190930    1160 out.go:291] Setting OutFile to fd 620 ...
	I0719 03:26:35.190930    1160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:26:35.190930    1160 out.go:304] Setting ErrFile to fd 624...
	I0719 03:26:35.190930    1160 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0719 03:26:35.202931    1160 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0719 03:26:35.213944    1160 out.go:298] Setting JSON to true
	I0719 03:26:35.216932    1160 start.go:129] hostinfo: {"hostname":"minikube6","uptime":18621,"bootTime":1721340973,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:26:35.217943    1160 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:26:35.234931    1160 out.go:97] [download-only-907700] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:26:35.235978    1160 notify.go:220] Checking for updates...
	W0719 03:26:35.235978    1160 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0719 03:26:35.239073    1160 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:26:35.242224    1160 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:26:35.244751    1160 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:26:35.247296    1160 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0719 03:26:35.253028    1160 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:26:35.253883    1160 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:26:40.664117    1160 out.go:97] Using the hyperv driver based on user configuration
	I0719 03:26:40.664231    1160 start.go:297] selected driver: hyperv
	I0719 03:26:40.664231    1160 start.go:901] validating driver "hyperv" against <nil>
	I0719 03:26:40.664672    1160 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:26:40.716732    1160 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0719 03:26:40.718059    1160 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:26:40.718059    1160 cni.go:84] Creating CNI manager for ""
	I0719 03:26:40.718222    1160 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 03:26:40.718335    1160 start.go:340] cluster config:
	{Name:download-only-907700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-907700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:26:40.719153    1160 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:26:40.723284    1160 out.go:97] Downloading VM boot image ...
	I0719 03:26:40.723473    1160 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.1-1721324531-19298-amd64.iso
	I0719 03:26:46.929351    1160 out.go:97] Starting "download-only-907700" primary control-plane node in "download-only-907700" cluster
	I0719 03:26:46.929351    1160 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 03:26:46.987434    1160 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0719 03:26:46.987969    1160 cache.go:56] Caching tarball of preloaded images
	I0719 03:26:46.988537    1160 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 03:26:46.991978    1160 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0719 03:26:46.991978    1160 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:26:47.117926    1160 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0719 03:26:51.717596    1160 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:26:51.719192    1160 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:26:52.721318    1160 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0719 03:26:52.722566    1160 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-907700\config.json ...
	I0719 03:26:52.723489    1160 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-907700\config.json: {Name:mk7064bdd77ae4fc46e97c38311941a8a4d962e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 03:26:52.724838    1160 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0719 03:26:52.726291    1160 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-907700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-907700"

-- /stdout --
** stderr ** 
	W0719 03:26:57.589129    3820 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.35s)

TestDownloadOnly/v1.20.0/DeleteAll (1.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1852842s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.28s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-907700
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-907700: (1.2790782s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.28s)

TestDownloadOnly/v1.30.3/json-events (11.93s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-217100 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-217100 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperv: (11.9330054s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (11.93s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-217100
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-217100: exit status 85 (288.3883ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-907700 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | -p download-only-907700        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| delete  | -p download-only-907700        | download-only-907700 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:27 UTC |
	| start   | -o=json --download-only        | download-only-217100 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | -p download-only-217100        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:27:00
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:27:00.461204    8452 out.go:291] Setting OutFile to fd 720 ...
	I0719 03:27:00.461681    8452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:00.461681    8452 out.go:304] Setting ErrFile to fd 624...
	I0719 03:27:00.461681    8452 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:00.485169    8452 out.go:298] Setting JSON to true
	I0719 03:27:00.488479    8452 start.go:129] hostinfo: {"hostname":"minikube6","uptime":18646,"bootTime":1721340973,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:27:00.488479    8452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:27:00.494845    8452 out.go:97] [download-only-217100] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:27:00.494845    8452 notify.go:220] Checking for updates...
	I0719 03:27:00.498046    8452 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:27:00.501189    8452 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:27:00.504347    8452 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:27:00.506820    8452 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0719 03:27:00.511940    8452 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:27:00.513212    8452 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:27:06.019331    8452 out.go:97] Using the hyperv driver based on user configuration
	I0719 03:27:06.019852    8452 start.go:297] selected driver: hyperv
	I0719 03:27:06.019852    8452 start.go:901] validating driver "hyperv" against <nil>
	I0719 03:27:06.020024    8452 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:27:06.069322    8452 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0719 03:27:06.070944    8452 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:27:06.070944    8452 cni.go:84] Creating CNI manager for ""
	I0719 03:27:06.070944    8452 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:27:06.070944    8452 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:27:06.070944    8452 start.go:340] cluster config:
	{Name:download-only-217100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-217100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:27:06.071510    8452 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:27:06.075057    8452 out.go:97] Starting "download-only-217100" primary control-plane node in "download-only-217100" cluster
	I0719 03:27:06.075057    8452 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:27:06.134577    8452 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:27:06.134577    8452 cache.go:56] Caching tarball of preloaded images
	I0719 03:27:06.135076    8452 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0719 03:27:06.160905    8452 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0719 03:27:06.160905    8452 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:27:06.282682    8452 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0719 03:27:09.965240    8452 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:27:09.966114    8452 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-217100 host does not exist
	  To start a cluster, run: "minikube start -p download-only-217100"

-- /stdout --
** stderr ** 
	W0719 03:27:12.332313    3980 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

TestDownloadOnly/v1.30.3/DeleteAll (1.16s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1601017s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (1.16s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-217100
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-217100: (1.12881s)
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (1.13s)

TestDownloadOnly/v1.31.0-beta.0/json-events (12.5s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-641000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-641000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperv: (12.497529s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (12.50s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.31s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-641000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-641000: exit status 85 (303.7917ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-907700 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC |                     |
	|         | -p download-only-907700             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:26 UTC |
	| delete  | -p download-only-907700             | download-only-907700 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:26 UTC | 19 Jul 24 03:27 UTC |
	| start   | -o=json --download-only             | download-only-217100 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | -p download-only-217100             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	| delete  | --all                               | minikube             | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| delete  | -p download-only-217100             | download-only-217100 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC | 19 Jul 24 03:27 UTC |
	| start   | -o=json --download-only             | download-only-641000 | minikube6\jenkins | v1.33.1 | 19 Jul 24 03:27 UTC |                     |
	|         | -p download-only-641000             |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr           |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |                   |         |                     |                     |
	|         | --container-runtime=docker          |                      |                   |         |                     |                     |
	|         | --driver=hyperv                     |                      |                   |         |                     |                     |
	|---------|-------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/19 03:27:14
	Running on machine: minikube6
	Binary: Built with gc go1.22.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 03:27:14.972749    5396 out.go:291] Setting OutFile to fd 832 ...
	I0719 03:27:14.973523    5396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:14.973523    5396 out.go:304] Setting ErrFile to fd 836...
	I0719 03:27:14.973523    5396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 03:27:14.996113    5396 out.go:298] Setting JSON to true
	I0719 03:27:14.999851    5396 start.go:129] hostinfo: {"hostname":"minikube6","uptime":18661,"bootTime":1721340973,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4651 Build 19045.4651","kernelVersion":"10.0.19045.4651 Build 19045.4651","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0719 03:27:14.999851    5396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0719 03:27:15.006684    5396 out.go:97] [download-only-641000] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	I0719 03:27:15.006798    5396 notify.go:220] Checking for updates...
	I0719 03:27:15.009030    5396 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0719 03:27:15.012549    5396 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0719 03:27:15.015207    5396 out.go:169] MINIKUBE_LOCATION=19302
	I0719 03:27:15.018495    5396 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0719 03:27:15.023940    5396 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 03:27:15.025165    5396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0719 03:27:20.445670    5396 out.go:97] Using the hyperv driver based on user configuration
	I0719 03:27:20.445805    5396 start.go:297] selected driver: hyperv
	I0719 03:27:20.445805    5396 start.go:901] validating driver "hyperv" against <nil>
	I0719 03:27:20.445870    5396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0719 03:27:20.494333    5396 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0719 03:27:20.495111    5396 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 03:27:20.495111    5396 cni.go:84] Creating CNI manager for ""
	I0719 03:27:20.495111    5396 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 03:27:20.495111    5396 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 03:27:20.495648    5396 start.go:340] cluster config:
	{Name:download-only-641000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-641000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0719 03:27:20.495818    5396 iso.go:125] acquiring lock: {Name:mkf36ab82752c372a68f02aa77a649a4232946ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 03:27:20.499065    5396 out.go:97] Starting "download-only-641000" primary control-plane node in "download-only-641000" cluster
	I0719 03:27:20.499217    5396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 03:27:20.556533    5396 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 03:27:20.557225    5396 cache.go:56] Caching tarball of preloaded images
	I0719 03:27:20.557533    5396 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0719 03:27:20.560771    5396 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0719 03:27:20.560910    5396 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:27:20.682813    5396 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0719 03:27:24.860430    5396 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0719 03:27:24.861024    5396 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-641000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-641000"

-- /stdout --
** stderr ** 
	W0719 03:27:27.417299    3324 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.31s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.17s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1719271s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (1.17s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.23s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-641000
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-641000: (1.2259801s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (1.23s)

TestBinaryMirror (7.47s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-056600 --alsologtostderr --binary-mirror http://127.0.0.1:58266 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-056600 --alsologtostderr --binary-mirror http://127.0.0.1:58266 --driver=hyperv: (6.6282925s)
helpers_test.go:175: Cleaning up "binary-mirror-056600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-056600
--- PASS: TestBinaryMirror (7.47s)

TestOffline (444.68s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-359900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-359900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (6m38.2806786s)
helpers_test.go:175: Cleaning up "offline-docker-359900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-359900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-359900: (46.4019979s)
--- PASS: TestOffline (444.68s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-811100
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-811100: exit status 85 (281.0132ms)

-- stdout --
	* Profile "addons-811100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-811100"

-- /stdout --
** stderr ** 
	W0719 03:27:41.246011    6840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-811100
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-811100: exit status 85 (265.1685ms)

-- stdout --
	* Profile "addons-811100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-811100"

-- /stdout --
** stderr ** 
	W0719 03:27:41.246011   12124 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.27s)

TestAddons/Setup (448.62s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-811100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-811100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (7m28.6192416s)
--- PASS: TestAddons/Setup (448.62s)

TestAddons/parallel/Ingress (68.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-811100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-811100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-811100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4a6ab348-282b-42b4-96a5-cfc41f2742ce] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4a6ab348-282b-42b4-96a5-cfc41f2742ce] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0073502s
addons_test.go:264: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.6722031s)
addons_test.go:271: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-811100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0719 03:37:22.659880    7400 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:288: (dbg) Run:  kubectl --context addons-811100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 ip
addons_test.go:293: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 ip: (2.5937947s)
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 172.28.164.220
addons_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable ingress-dns --alsologtostderr -v=1: (16.7858888s)
addons_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable ingress --alsologtostderr -v=1: (22.1941107s)
--- PASS: TestAddons/parallel/Ingress (68.36s)

TestAddons/parallel/InspektorGadget (27.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bkjvc" [66f2dc77-d2c6-463c-a58e-1acea2e54569] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0134628s
addons_test.go:843: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-811100
addons_test.go:843: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-811100: (22.8112386s)
--- PASS: TestAddons/parallel/InspektorGadget (27.83s)

TestAddons/parallel/MetricsServer (21.1s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.4816ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-k5xqw" [2ffe89c5-d971-4af7-8ab4-31b32d653271] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0071974s
addons_test.go:417: (dbg) Run:  kubectl --context addons-811100 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:434: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable metrics-server --alsologtostderr -v=1: (15.9010388s)
--- PASS: TestAddons/parallel/MetricsServer (21.10s)

TestAddons/parallel/HelmTiller (38.53s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 3.9914ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-9m5qz" [eeffa01d-176d-4218-b5d5-db629ef2b701] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0198527s
addons_test.go:475: (dbg) Run:  kubectl --context addons-811100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-811100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (17.429053s)
addons_test.go:492: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:492: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable helm-tiller --alsologtostderr -v=1: (16.066601s)
--- PASS: TestAddons/parallel/HelmTiller (38.53s)

TestAddons/parallel/CSI (93.99s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 10.134ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-811100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-811100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d2cc4033-f7f4-4f53-8528-56c8dd67d9b5] Pending
helpers_test.go:344: "task-pv-pod" [d2cc4033-f7f4-4f53-8528-56c8dd67d9b5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d2cc4033-f7f4-4f53-8528-56c8dd67d9b5] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.0116522s
addons_test.go:586: (dbg) Run:  kubectl --context addons-811100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-811100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-811100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-811100 delete pod task-pv-pod
addons_test.go:596: (dbg) Done: kubectl --context addons-811100 delete pod task-pv-pod: (1.4603451s)
addons_test.go:602: (dbg) Run:  kubectl --context addons-811100 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-811100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-811100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1cec236d-43ec-4b27-841f-1bf0cc153a5d] Pending
helpers_test.go:344: "task-pv-pod-restore" [1cec236d-43ec-4b27-841f-1bf0cc153a5d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1cec236d-43ec-4b27-841f-1bf0cc153a5d] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0167039s
addons_test.go:628: (dbg) Run:  kubectl --context addons-811100 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-811100 delete pod task-pv-pod-restore: (1.5816728s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-811100 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-811100 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (23.8394661s)
addons_test.go:644: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable volumesnapshots --alsologtostderr -v=1: (15.8166965s)
--- PASS: TestAddons/parallel/CSI (93.99s)

TestAddons/parallel/Headlamp (34.55s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-811100 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-811100 --alsologtostderr -v=1: (16.5213851s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-5svjh" [eb5b56fc-7118-470c-9776-375ff318493d] Pending
helpers_test.go:344: "headlamp-7867546754-5svjh" [eb5b56fc-7118-470c-9776-375ff318493d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-5svjh" [eb5b56fc-7118-470c-9776-375ff318493d] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.0199973s
--- PASS: TestAddons/parallel/Headlamp (34.55s)

TestAddons/parallel/CloudSpanner (21.52s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-vp5bx" [2a1f967e-2fed-4a63-8f44-8ad25eff7b86] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0211954s
addons_test.go:862: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-811100
addons_test.go:862: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-811100: (16.4870094s)
--- PASS: TestAddons/parallel/CloudSpanner (21.52s)

TestAddons/parallel/LocalPath (86.74s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-811100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-811100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-811100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [992bc80c-aa7c-4f2c-bf67-15a98f940941] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [992bc80c-aa7c-4f2c-bf67-15a98f940941] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [992bc80c-aa7c-4f2c-bf67-15a98f940941] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.0071978s
addons_test.go:992: (dbg) Run:  kubectl --context addons-811100 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 ssh "cat /opt/local-path-provisioner/pvc-114f4030-a1d1-4247-ab71-0d8af834e357_default_test-pvc/file1"
addons_test.go:1001: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 ssh "cat /opt/local-path-provisioner/pvc-114f4030-a1d1-4247-ab71-0d8af834e357_default_test-pvc/file1": (10.3178887s)
addons_test.go:1013: (dbg) Run:  kubectl --context addons-811100 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-811100 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (1m2.6239673s)
--- PASS: TestAddons/parallel/LocalPath (86.74s)

TestAddons/parallel/NvidiaDevicePlugin (22.8s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-s468j" [3a6ddfdd-7b14-41e2-9d1d-4f7227d5cbe1] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0179814s
addons_test.go:1056: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-811100
addons_test.go:1056: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-811100: (16.7731783s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (22.80s)

TestAddons/parallel/Yakd (6.02s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-6rvvr" [5a373c3c-46d9-4fc3-abff-258d18fd7fb9] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0122959s
--- PASS: TestAddons/parallel/Yakd (6.02s)

TestAddons/parallel/Volcano (150.73s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:905: volcano-controller stabilized in 28.1193ms
addons_test.go:897: volcano-admission stabilized in 29.0292ms
addons_test.go:889: volcano-scheduler stabilized in 29.2908ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-csf64" [4ef18a99-2b33-4f5e-91e7-d2dc96f862d2] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.0180831s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-mzdsg" [b08205f7-e59b-4ea3-a37b-1e7e85812711] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.0135062s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-kd8dh" [a0fa7c0e-fffa-477a-9c13-aa0d26e7624b] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 6.0176415s
addons_test.go:924: (dbg) Run:  kubectl --context addons-811100 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-811100 create -f testdata\vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-811100 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [ebe6e7e0-05e5-4a59-a5a7-6171c9dcbc23] Pending
helpers_test.go:344: "test-job-nginx-0" [ebe6e7e0-05e5-4a59-a5a7-6171c9dcbc23] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:344: "test-job-nginx-0" [ebe6e7e0-05e5-4a59-a5a7-6171c9dcbc23] Pending: PodScheduled:Unschedulable (Pod my-volcano/test-job-nginx-0 can possibly be assigned to addons-811100, once resource is released)
helpers_test.go:344: "test-job-nginx-0" [ebe6e7e0-05e5-4a59-a5a7-6171c9dcbc23] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [ebe6e7e0-05e5-4a59-a5a7-6171c9dcbc23] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 1m47.0073613s
addons_test.go:960: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-811100 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-windows-amd64.exe -p addons-811100 addons disable volcano --alsologtostderr -v=1: (26.5607818s)
--- PASS: TestAddons/parallel/Volcano (150.73s)

TestAddons/serial/GCPAuth/Namespaces (0.32s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-811100 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-811100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.32s)

TestAddons/StoppedEnableDisable (59.35s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-811100
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-811100: (46.5829516s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-811100
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-811100: (4.9755279s)
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-811100
addons_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-811100: (4.94863s)
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-811100
addons_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-811100: (2.8413172s)
--- PASS: TestAddons/StoppedEnableDisable (59.35s)

TestForceSystemdFlag (256.84s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-359900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-359900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (3m25.5003683s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-359900 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-359900 ssh "docker info --format {{.CgroupDriver}}": (10.6166458s)
helpers_test.go:175: Cleaning up "force-systemd-flag-359900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-359900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-359900: (40.7272627s)
--- PASS: TestForceSystemdFlag (256.84s)

TestErrorSpam/start (17.42s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 start --dry-run: (5.8029847s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 start --dry-run: (5.817002s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 start --dry-run: (5.7934208s)
--- PASS: TestErrorSpam/start (17.42s)

TestErrorSpam/status (37.62s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 status: (12.8612693s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 status: (12.4130131s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 status: (12.3430237s)
--- PASS: TestErrorSpam/status (37.62s)

TestErrorSpam/pause (23.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 pause: (8.0701656s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 pause: (7.591406s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 pause
E0719 03:45:10.073695    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:10.088837    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:10.104497    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:10.135857    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:10.183806    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:10.277694    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:10.452691    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:10.785394    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:11.435449    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:12.722290    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 pause: (7.6830436s)
--- PASS: TestErrorSpam/pause (23.35s)

TestErrorSpam/unpause (23.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 unpause
E0719 03:45:15.285660    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:45:20.414480    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 unpause: (7.8577032s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 unpause: (7.8137515s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 unpause
E0719 03:45:30.660262    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 unpause: (7.7542797s)
--- PASS: TestErrorSpam/unpause (23.43s)

TestErrorSpam/stop (61.94s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 stop
E0719 03:45:51.156631    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 stop: (39.1749493s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 stop: (11.619786s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 stop
E0719 03:46:32.117608    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-907600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-907600 stop: (11.1343975s)
--- PASS: TestErrorSpam/stop (61.94s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\9604\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (212.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-149600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0719 03:47:54.052104    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 03:50:10.083483    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-149600 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m32.9304427s)
--- PASS: TestFunctional/serial/StartWithProxy (212.94s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/KubeContext (0.13s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/CacheCmd/cache/add_remote (348.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 cache add registry.k8s.io/pause:3.1
E0719 04:00:10.089448    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 cache add registry.k8s.io/pause:3.1: (1m47.3064909s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 cache add registry.k8s.io/pause:3.3
E0719 04:01:33.283505    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 cache add registry.k8s.io/pause:3.3: (2m0.5030356s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 cache add registry.k8s.io/pause:latest: (2m0.4828946s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (348.29s)

TestFunctional/serial/CacheCmd/cache/add_local (60.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-149600 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1909033700\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-149600 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1909033700\001: (2.3457558s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 cache add minikube-local-cache-test:functional-149600
E0719 04:05:10.081804    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-149600 cache add minikube-local-cache-test:functional-149600: (57.9643644s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-149600 cache delete minikube-local-cache-test:functional-149600
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-149600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (60.78s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.26s)

TestFunctional/serial/CacheCmd/cache/list (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.26s)

TestFunctional/serial/CacheCmd/cache/delete (0.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.49s)

TestFunctional/delete_echo-server_images (0.02s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f docker.io/kicbase/echo-server:1.0: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "docker.io/kicbase/echo-server:1.0" from docker images. args "docker rmi -f docker.io/kicbase/echo-server:1.0": context deadline exceeded
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-149600
functional_test.go:189: (dbg) Non-zero exit: docker rmi -f docker.io/kicbase/echo-server:functional-149600: context deadline exceeded (0s)
functional_test.go:191: failed to remove image "docker.io/kicbase/echo-server:functional-149600" from docker images. args "docker rmi -f docker.io/kicbase/echo-server:functional-149600": context deadline exceeded
--- PASS: TestFunctional/delete_echo-server_images (0.02s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-149600
functional_test.go:197: (dbg) Non-zero exit: docker rmi -f localhost/my-image:functional-149600: context deadline exceeded (0s)
functional_test.go:199: failed to remove image my-image from docker images. args "docker rmi -f localhost/my-image:functional-149600": context deadline exceeded
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-149600
functional_test.go:205: (dbg) Non-zero exit: docker rmi -f minikube-local-cache-test:functional-149600: context deadline exceeded (0s)
functional_test.go:207: failed to remove image minikube local cache test images from docker. args "docker rmi -f minikube-local-cache-test:functional-149600": context deadline exceeded
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (744.24s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-062500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0719 04:30:10.111149    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 04:34:53.318496    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 04:35:10.106962    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 04:40:10.105837    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-062500 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m46.2277675s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 status -v=7 --alsologtostderr: (38.0082791s)
--- PASS: TestMultiControlPlane/serial/StartCluster (744.24s)

TestMultiControlPlane/serial/DeployApp (12.5s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-062500 -- rollout status deployment/busybox: (3.6253539s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-drzm5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-drzm5 -- nslookup kubernetes.io: (1.7730084s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-njwwk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-njwwk -- nslookup kubernetes.io: (1.6402997s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-nkb7m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-drzm5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-njwwk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-nkb7m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-drzm5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-njwwk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-062500 -- exec busybox-fc5497c4f-nkb7m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (12.50s)

TestMultiControlPlane/serial/AddWorkerNode (273.02s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-062500 -v=7 --alsologtostderr
E0719 04:45:10.111161    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-062500 -v=7 --alsologtostderr: (3m42.6064601s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 status -v=7 --alsologtostderr: (50.4165113s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (273.02s)

TestMultiControlPlane/serial/NodeLabels (0.2s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-062500 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.20s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (29.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (29.8960582s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (29.90s)

TestMultiControlPlane/serial/CopyFile (657.44s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 status --output json -v=7 --alsologtostderr: (50.9440232s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp testdata\cp-test.txt ha-062500:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp testdata\cp-test.txt ha-062500:/home/docker/cp-test.txt: (10.0150735s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt": (9.9984066s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500.txt: (9.8977108s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt": (10.0256921s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500:/home/docker/cp-test.txt ha-062500-m02:/home/docker/cp-test_ha-062500_ha-062500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500:/home/docker/cp-test.txt ha-062500-m02:/home/docker/cp-test_ha-062500_ha-062500-m02.txt: (17.2443046s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt": (10.0067018s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test_ha-062500_ha-062500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test_ha-062500_ha-062500-m02.txt": (9.940103s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500:/home/docker/cp-test.txt ha-062500-m03:/home/docker/cp-test_ha-062500_ha-062500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500:/home/docker/cp-test.txt ha-062500-m03:/home/docker/cp-test_ha-062500_ha-062500-m03.txt: (17.2369264s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt": (9.9500686s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test_ha-062500_ha-062500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test_ha-062500_ha-062500-m03.txt": (9.8062719s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500:/home/docker/cp-test.txt ha-062500-m04:/home/docker/cp-test_ha-062500_ha-062500-m04.txt
E0719 04:50:10.119477    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500:/home/docker/cp-test.txt ha-062500-m04:/home/docker/cp-test_ha-062500_ha-062500-m04.txt: (17.3291624s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test.txt": (10.0011374s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test_ha-062500_ha-062500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test_ha-062500_ha-062500-m04.txt": (9.9378184s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp testdata\cp-test.txt ha-062500-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp testdata\cp-test.txt ha-062500-m02:/home/docker/cp-test.txt: (9.9761243s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt": (9.8237086s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500-m02.txt: (9.8784258s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt": (9.9539746s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m02:/home/docker/cp-test.txt ha-062500:/home/docker/cp-test_ha-062500-m02_ha-062500.txt
E0719 04:51:33.342061    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m02:/home/docker/cp-test.txt ha-062500:/home/docker/cp-test_ha-062500-m02_ha-062500.txt: (17.3392043s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt": (10.1299228s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test_ha-062500-m02_ha-062500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test_ha-062500-m02_ha-062500.txt": (10.0675061s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m02:/home/docker/cp-test.txt ha-062500-m03:/home/docker/cp-test_ha-062500-m02_ha-062500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m02:/home/docker/cp-test.txt ha-062500-m03:/home/docker/cp-test_ha-062500-m02_ha-062500-m03.txt: (17.2588334s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt": (9.9390654s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test_ha-062500-m02_ha-062500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test_ha-062500-m02_ha-062500-m03.txt": (10.0733548s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m02:/home/docker/cp-test.txt ha-062500-m04:/home/docker/cp-test_ha-062500-m02_ha-062500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m02:/home/docker/cp-test.txt ha-062500-m04:/home/docker/cp-test_ha-062500-m02_ha-062500-m04.txt: (17.4587761s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test.txt": (9.9327146s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test_ha-062500-m02_ha-062500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test_ha-062500-m02_ha-062500-m04.txt": (9.9990809s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp testdata\cp-test.txt ha-062500-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp testdata\cp-test.txt ha-062500-m03:/home/docker/cp-test.txt: (9.950954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt": (9.8761775s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500-m03.txt: (9.9207286s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt": (9.8931244s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt ha-062500:/home/docker/cp-test_ha-062500-m03_ha-062500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt ha-062500:/home/docker/cp-test_ha-062500-m03_ha-062500.txt: (17.2658765s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt": (9.8904471s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test_ha-062500-m03_ha-062500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test_ha-062500-m03_ha-062500.txt": (9.9073934s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt ha-062500-m02:/home/docker/cp-test_ha-062500-m03_ha-062500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt ha-062500-m02:/home/docker/cp-test_ha-062500-m03_ha-062500-m02.txt: (17.2797761s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt": (9.9177294s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test_ha-062500-m03_ha-062500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test_ha-062500-m03_ha-062500-m02.txt": (9.920802s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt ha-062500-m04:/home/docker/cp-test_ha-062500-m03_ha-062500-m04.txt
E0719 04:55:10.128638    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m03:/home/docker/cp-test.txt ha-062500-m04:/home/docker/cp-test_ha-062500-m03_ha-062500-m04.txt: (17.275885s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test.txt": (9.9232367s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test_ha-062500-m03_ha-062500-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test_ha-062500-m03_ha-062500-m04.txt": (9.9066442s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp testdata\cp-test.txt ha-062500-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp testdata\cp-test.txt ha-062500-m04:/home/docker/cp-test.txt: (10.043859s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt": (10.0300845s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3837681421\001\cp-test_ha-062500-m04.txt: (9.9820791s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt": (10.1162858s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt ha-062500:/home/docker/cp-test_ha-062500-m04_ha-062500.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt ha-062500:/home/docker/cp-test_ha-062500-m04_ha-062500.txt: (17.5022149s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt": (9.9242638s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test_ha-062500-m04_ha-062500.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500 "sudo cat /home/docker/cp-test_ha-062500-m04_ha-062500.txt": (9.9876437s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt ha-062500-m02:/home/docker/cp-test_ha-062500-m04_ha-062500-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt ha-062500-m02:/home/docker/cp-test_ha-062500-m04_ha-062500-m02.txt: (17.363523s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt": (9.9534161s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test_ha-062500-m04_ha-062500-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m02 "sudo cat /home/docker/cp-test_ha-062500-m04_ha-062500-m02.txt": (10.0381803s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt ha-062500-m03:/home/docker/cp-test_ha-062500-m04_ha-062500-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 cp ha-062500-m04:/home/docker/cp-test.txt ha-062500-m03:/home/docker/cp-test_ha-062500-m04_ha-062500-m03.txt: (17.4422589s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m04 "sudo cat /home/docker/cp-test.txt": (10.040031s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test_ha-062500-m04_ha-062500-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-062500 ssh -n ha-062500-m03 "sudo cat /home/docker/cp-test_ha-062500-m04_ha-062500-m03.txt": (9.8849268s)
--- PASS: TestMultiControlPlane/serial/CopyFile (657.44s)
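The cp/ssh-cat sequence above repeats one pattern: copy a file from a source node to every other node, then verify each copy with `sudo cat`. A minimal sketch that generates those command lines (profile and node names are taken from the log; the helper itself is hypothetical, not part of minikube's tooling):

```python
def copy_verify_commands(profile, src, others, path="/home/docker/cp-test.txt"):
    """Return the minikube cp/ssh invocations for one source node's fan-out."""
    cmds = []
    for dst in others:
        # Destination filename encodes source and destination, as in the log.
        dst_path = f"/home/docker/cp-test_{src}_{dst}.txt"
        cmds.append(["minikube", "-p", profile, "cp", f"{src}:{path}", f"{dst}:{dst_path}"])
        cmds.append(["minikube", "-p", profile, "ssh", "-n", dst, f"sudo cat {dst_path}"])
    return cmds

cmds = copy_verify_commands("ha-062500", "ha-062500-m04",
                            ["ha-062500", "ha-062500-m02", "ha-062500-m03"])
print(len(cmds))  # two commands (cp + verify) per destination node
```

With three destination nodes this yields six invocations, matching the alternating cp/ssh pairs in the log (the log additionally re-reads the source file before each verify, which this sketch omits).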

TestImageBuild/serial/Setup (204.04s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-112200 --driver=hyperv
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-112200 --driver=hyperv: (3m24.0404851s)
--- PASS: TestImageBuild/serial/Setup (204.04s)

TestImageBuild/serial/NormalBuild (10.18s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-112200
E0719 05:05:10.128184    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-112200: (10.1763002s)
--- PASS: TestImageBuild/serial/NormalBuild (10.18s)

TestImageBuild/serial/BuildWithBuildArg (9.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-112200
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-112200: (9.0939758s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (9.10s)

TestImageBuild/serial/BuildWithDockerIgnore (7.96s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-112200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-112200: (7.960442s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.96s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.74s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-112200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-112200: (7.7433352s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.74s)

TestJSONOutput/start/Command (247.54s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-644700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0719 05:08:13.353811    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 05:10:10.133047    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-644700 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (4m7.5434918s)
--- PASS: TestJSONOutput/start/Command (247.54s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (8.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-644700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-644700 --output=json --user=testUser: (8.0930282s)
--- PASS: TestJSONOutput/pause/Command (8.09s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.94s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-644700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-644700 --output=json --user=testUser: (7.934817s)
--- PASS: TestJSONOutput/unpause/Command (7.94s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (35.14s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-644700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-644700 --output=json --user=testUser: (35.1374681s)
--- PASS: TestJSONOutput/stop/Command (35.14s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.41s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-558600 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-558600 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (278.3613ms)

-- stdout --
	{"specversion":"1.0","id":"aa20c0db-ad65-4e3c-a930-b213fed745fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-558600] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"532bfdd0-86fd-4f5b-b5c1-b4b007f1864f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"3a2ac34a-6429-41c4-a9ae-4448d561142a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2b66e2f6-ada7-4779-98de-46ae5ab4ee3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"01fa1475-b7ea-4c23-b423-c393215a4faf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"16024195-2f08-4f8a-9cbb-52699324e367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e475a81e-2796-4d23-bcf9-3d482d5c8cca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
** stderr ** 
	W0719 05:11:36.185619    7784 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-558600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-558600
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-558600: (1.128108s)
--- PASS: TestErrorJSONOutput (1.41s)
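Each line minikube prints with `--output=json` is a CloudEvents envelope, as in the stdout above. A minimal sketch that filters the error events out of such a stream (the event shape is copied from this log, not from a documented schema):

```python
import json

# One CloudEvent per line, as emitted by `minikube start --output=json`;
# this sample is abridged from the TestErrorJSONOutput stdout above.
stream = '{"specversion":"1.0","id":"e475a81e-2796-4d23-bcf9-3d482d5c8cca",' \
         '"source":"https://minikube.sigs.k8s.io/",' \
         '"type":"io.k8s.sigs.minikube.error",' \
         '"datacontenttype":"application/json",' \
         '"data":{"advice":"","exitcode":"56","issues":"",' \
         '"message":"The driver \'fail\' is not supported on windows/amd64",' \
         '"name":"DRV_UNSUPPORTED_OS","url":""}}'

def find_errors(lines):
    """Return the data payload of every io.k8s.sigs.minikube.error event."""
    errors = []
    for line in lines:
        event = json.loads(line)
        if event.get("type") == "io.k8s.sigs.minikube.error":
            errors.append(event["data"])
    return errors

errs = find_errors(stream.splitlines())
print(errs[0]["exitcode"], errs[0]["name"])  # 56 DRV_UNSUPPORTED_OS
```

Note that `exitcode` arrives as a string ("56"), which matches the exit status 56 the test asserts on.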

TestMainNoArgs (0.26s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.26s)

TestMinikubeProfile (528.88s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-694400 --driver=hyperv
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-694400 --driver=hyperv: (3m18.4359368s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-694400 --driver=hyperv
E0719 05:15:10.128384    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-694400 --driver=hyperv: (3m21.9086697s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-694400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.0503349s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-694400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.9501693s)
helpers_test.go:175: Cleaning up "second-694400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-694400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-694400: (42.2211651s)
helpers_test.go:175: Cleaning up "first-694400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-694400
E0719 05:20:10.138872    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-694400: (41.4191828s)
--- PASS: TestMinikubeProfile (528.88s)

TestMountStart/serial/StartWithMountFirst (159.4s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-009400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-009400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m38.3912216s)
--- PASS: TestMountStart/serial/StartWithMountFirst (159.40s)

TestMountStart/serial/VerifyMountFirst (9.9s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-009400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-009400 ssh -- ls /minikube-host: (9.9022333s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.90s)

TestMountStart/serial/StartWithMountSecond (158.54s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-009400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0719 05:24:53.369803    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 05:25:10.140783    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-009400 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m37.5293292s)
--- PASS: TestMountStart/serial/StartWithMountSecond (158.54s)

TestMountStart/serial/VerifyMountSecond (9.76s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-009400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-009400 ssh -- ls /minikube-host: (9.761289s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.76s)

TestMountStart/serial/DeleteFirst (31.49s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-009400 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-009400 --alsologtostderr -v=5: (31.4865791s)
--- PASS: TestMountStart/serial/DeleteFirst (31.49s)

TestMountStart/serial/VerifyMountPostDelete (9.61s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-009400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-009400 ssh -- ls /minikube-host: (9.6136402s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.61s)

TestMountStart/serial/Stop (27.04s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-009400
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-009400: (27.0379158s)
--- PASS: TestMountStart/serial/Stop (27.04s)

TestMountStart/serial/RestartStopped (121.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-009400
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-009400: (2m0.0295017s)
--- PASS: TestMountStart/serial/RestartStopped (121.04s)

TestMountStart/serial/VerifyMountPostStop (9.84s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-009400 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-009400 ssh -- ls /minikube-host: (9.8420587s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.84s)

TestMultiNode/serial/FreshStart2Nodes (448.83s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-761300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0719 05:30:10.149496    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 05:35:10.152634    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-761300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (7m4.512599s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 status --alsologtostderr: (24.3151131s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (448.83s)

TestMultiNode/serial/DeployApp2Nodes (9.13s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- rollout status deployment/busybox: (3.1124213s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-22cdf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-22cdf -- nslookup kubernetes.io: (1.7655392s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-n4tql -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-22cdf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-n4tql -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-22cdf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-761300 -- exec busybox-fc5497c4f-n4tql -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.13s)

TestMultiNode/serial/AddNode (234.9s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-761300 -v 3 --alsologtostderr
E0719 05:40:10.148338    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
E0719 05:41:33.384410    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-761300 -v 3 --alsologtostderr: (3m19.398735s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 status --alsologtostderr: (35.4873467s)
--- PASS: TestMultiNode/serial/AddNode (234.90s)

TestMultiNode/serial/MultiNodeLabels (0.18s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-761300 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.18s)

TestMultiNode/serial/ProfileList (11.77s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (11.7608159s)
--- PASS: TestMultiNode/serial/ProfileList (11.77s)

TestMultiNode/serial/CopyFile (372.16s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 status --output json --alsologtostderr: (36.5420034s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp testdata\cp-test.txt multinode-761300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp testdata\cp-test.txt multinode-761300:/home/docker/cp-test.txt: (9.785537s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test.txt": (9.8354151s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4110903034\001\cp-test_multinode-761300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4110903034\001\cp-test_multinode-761300.txt: (9.7706592s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test.txt": (9.6671469s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300:/home/docker/cp-test.txt multinode-761300-m02:/home/docker/cp-test_multinode-761300_multinode-761300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300:/home/docker/cp-test.txt multinode-761300-m02:/home/docker/cp-test_multinode-761300_multinode-761300-m02.txt: (16.8027416s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test.txt": (9.6908573s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test_multinode-761300_multinode-761300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test_multinode-761300_multinode-761300-m02.txt": (9.5995935s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300:/home/docker/cp-test.txt multinode-761300-m03:/home/docker/cp-test_multinode-761300_multinode-761300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300:/home/docker/cp-test.txt multinode-761300-m03:/home/docker/cp-test_multinode-761300_multinode-761300-m03.txt: (16.7787798s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test.txt": (9.603604s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test_multinode-761300_multinode-761300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test_multinode-761300_multinode-761300-m03.txt": (9.6063518s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp testdata\cp-test.txt multinode-761300-m02:/home/docker/cp-test.txt
E0719 05:45:10.151481    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp testdata\cp-test.txt multinode-761300-m02:/home/docker/cp-test.txt: (9.731147s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test.txt": (9.6043697s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4110903034\001\cp-test_multinode-761300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4110903034\001\cp-test_multinode-761300-m02.txt: (9.7680339s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test.txt": (9.626928s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt multinode-761300:/home/docker/cp-test_multinode-761300-m02_multinode-761300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt multinode-761300:/home/docker/cp-test_multinode-761300-m02_multinode-761300.txt: (16.6255377s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test.txt": (9.5254214s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test_multinode-761300-m02_multinode-761300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test_multinode-761300-m02_multinode-761300.txt": (9.6932934s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt multinode-761300-m03:/home/docker/cp-test_multinode-761300-m02_multinode-761300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m02:/home/docker/cp-test.txt multinode-761300-m03:/home/docker/cp-test_multinode-761300-m02_multinode-761300-m03.txt: (16.9987469s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test.txt": (9.7733s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test_multinode-761300-m02_multinode-761300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test_multinode-761300-m02_multinode-761300-m03.txt": (9.6344248s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp testdata\cp-test.txt multinode-761300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp testdata\cp-test.txt multinode-761300-m03:/home/docker/cp-test.txt: (9.899876s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test.txt": (9.8764001s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4110903034\001\cp-test_multinode-761300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4110903034\001\cp-test_multinode-761300-m03.txt: (9.9151964s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test.txt": (9.945455s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt multinode-761300:/home/docker/cp-test_multinode-761300-m03_multinode-761300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt multinode-761300:/home/docker/cp-test_multinode-761300-m03_multinode-761300.txt: (17.3770145s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test.txt": (9.8993113s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test_multinode-761300-m03_multinode-761300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300 "sudo cat /home/docker/cp-test_multinode-761300-m03_multinode-761300.txt": (9.8480642s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt multinode-761300-m02:/home/docker/cp-test_multinode-761300-m03_multinode-761300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 cp multinode-761300-m03:/home/docker/cp-test.txt multinode-761300-m02:/home/docker/cp-test_multinode-761300-m03_multinode-761300-m02.txt: (17.0645041s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m03 "sudo cat /home/docker/cp-test.txt": (9.7073904s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test_multinode-761300-m03_multinode-761300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 ssh -n multinode-761300-m02 "sudo cat /home/docker/cp-test_multinode-761300-m03_multinode-761300-m02.txt": (9.9436745s)
--- PASS: TestMultiNode/serial/CopyFile (372.16s)

TestMultiNode/serial/StopNode (78.66s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 node stop m03: (25.1822192s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-761300 status: exit status 7 (26.7998984s)

-- stdout --
	multinode-761300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-761300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-761300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0719 05:49:11.538263    5276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-761300 status --alsologtostderr: exit status 7 (26.6734269s)

-- stdout --
	multinode-761300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-761300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-761300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	W0719 05:49:38.327870    7184 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0719 05:49:38.410230    7184 out.go:291] Setting OutFile to fd 864 ...
	I0719 05:49:38.411303    7184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:49:38.411372    7184 out.go:304] Setting ErrFile to fd 944...
	I0719 05:49:38.411372    7184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0719 05:49:38.425182    7184 out.go:298] Setting JSON to false
	I0719 05:49:38.425182    7184 mustload.go:65] Loading cluster: multinode-761300
	I0719 05:49:38.425182    7184 notify.go:220] Checking for updates...
	I0719 05:49:38.425744    7184 config.go:182] Loaded profile config "multinode-761300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0719 05:49:38.425744    7184 status.go:255] checking status of multinode-761300 ...
	I0719 05:49:38.426606    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:49:40.719686    7184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:49:40.719686    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:40.719686    7184 status.go:330] multinode-761300 host status = "Running" (err=<nil>)
	I0719 05:49:40.719686    7184 host.go:66] Checking if "multinode-761300" exists ...
	I0719 05:49:40.721331    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:49:42.956862    7184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:49:42.956921    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:42.956921    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:49:45.617478    7184 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:49:45.617478    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:45.618025    7184 host.go:66] Checking if "multinode-761300" exists ...
	I0719 05:49:45.629744    7184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 05:49:45.629744    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300 ).state
	I0719 05:49:47.810041    7184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:49:47.810041    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:47.810363    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300 ).networkadapters[0]).ipaddresses[0]
	I0719 05:49:50.443149    7184 main.go:141] libmachine: [stdout =====>] : 172.28.162.16
	
	I0719 05:49:50.443255    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:50.443478    7184 sshutil.go:53] new ssh client: &{IP:172.28.162.16 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300\id_rsa Username:docker}
	I0719 05:49:50.535029    7184 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9052256s)
	I0719 05:49:50.548685    7184 ssh_runner.go:195] Run: systemctl --version
	I0719 05:49:50.569553    7184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:49:50.595015    7184 kubeconfig.go:125] found "multinode-761300" server: "https://172.28.162.16:8443"
	I0719 05:49:50.595082    7184 api_server.go:166] Checking apiserver status ...
	I0719 05:49:50.606486    7184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 05:49:50.642488    7184 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2108/cgroup
	W0719 05:49:50.662435    7184 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2108/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0719 05:49:50.675828    7184 ssh_runner.go:195] Run: ls
	I0719 05:49:50.682276    7184 api_server.go:253] Checking apiserver healthz at https://172.28.162.16:8443/healthz ...
	I0719 05:49:50.690612    7184 api_server.go:279] https://172.28.162.16:8443/healthz returned 200:
	ok
	I0719 05:49:50.690612    7184 status.go:422] multinode-761300 apiserver status = Running (err=<nil>)
	I0719 05:49:50.690612    7184 status.go:257] multinode-761300 status: &{Name:multinode-761300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0719 05:49:50.690612    7184 status.go:255] checking status of multinode-761300-m02 ...
	I0719 05:49:50.691739    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:49:52.906912    7184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:49:52.907498    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:52.907498    7184 status.go:330] multinode-761300-m02 host status = "Running" (err=<nil>)
	I0719 05:49:52.907498    7184 host.go:66] Checking if "multinode-761300-m02" exists ...
	I0719 05:49:52.908253    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:49:55.153546    7184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:49:55.154575    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:55.154641    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:49:57.747448    7184 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:49:57.748447    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:57.748580    7184 host.go:66] Checking if "multinode-761300-m02" exists ...
	I0719 05:49:57.760551    7184 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0719 05:49:57.760551    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m02 ).state
	I0719 05:49:59.926846    7184 main.go:141] libmachine: [stdout =====>] : Running
	
	I0719 05:49:59.926846    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:49:59.927025    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-761300-m02 ).networkadapters[0]).ipaddresses[0]
	I0719 05:50:02.521126    7184 main.go:141] libmachine: [stdout =====>] : 172.28.167.151
	
	I0719 05:50:02.521876    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:50:02.522129    7184 sshutil.go:53] new ssh client: &{IP:172.28.167.151 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-761300-m02\id_rsa Username:docker}
	I0719 05:50:02.614688    7184 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.854078s)
	I0719 05:50:02.627334    7184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 05:50:02.654209    7184 status.go:257] multinode-761300-m02 status: &{Name:multinode-761300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0719 05:50:02.654209    7184 status.go:255] checking status of multinode-761300-m03 ...
	I0719 05:50:02.655207    7184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-761300-m03 ).state
	I0719 05:50:04.868857    7184 main.go:141] libmachine: [stdout =====>] : Off
	
	I0719 05:50:04.869292    7184 main.go:141] libmachine: [stderr =====>] : 
	I0719 05:50:04.869395    7184 status.go:330] multinode-761300-m03 host status = "Stopped" (err=<nil>)
	I0719 05:50:04.869395    7184 status.go:343] host is not running, skipping remaining checks
	I0719 05:50:04.869493    7184 status.go:257] multinode-761300-m03 status: &{Name:multinode-761300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (78.66s)

TestMultiNode/serial/StartAfterStop (198.72s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 node start m03 -v=7 --alsologtostderr
E0719 05:50:10.161474    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 node start m03 -v=7 --alsologtostderr: (2m41.6160635s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-761300 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-761300 status -v=7 --alsologtostderr: (36.9353551s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (198.72s)

TestPreload (529.23s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-792800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0719 06:05:10.172560    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-792800 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m35.2531172s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-792800 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-792800 image pull gcr.io/k8s-minikube/busybox: (8.9443659s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-792800
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-792800: (39.8018618s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-792800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0719 06:10:10.174543    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-792800 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m35.2013937s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-792800 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-792800 image list: (7.3931463s)
helpers_test.go:175: Cleaning up "test-preload-792800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-792800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-792800: (42.6359451s)
--- PASS: TestPreload (529.23s)

TestScheduledStopWindows (333.65s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-331200 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-331200 --memory=2048 --driver=hyperv: (3m19.4494771s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-331200 --schedule 5m
E0719 06:14:53.412625    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-331200 --schedule 5m: (10.9847661s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-331200 -n scheduled-stop-331200
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-331200 -n scheduled-stop-331200: exit status 1 (10.0226729s)

** stderr ** 
	W0719 06:14:53.677862   12352 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-331200 -- sudo systemctl show minikube-scheduled-stop --no-page
E0719 06:15:10.181794    9604 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-811100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-331200 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.8302663s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-331200 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-331200 --schedule 5s: (10.7896572s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-331200
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-331200: exit status 7 (2.5449451s)

-- stdout --
	scheduled-stop-331200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	W0719 06:16:24.337448    7288 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-331200 -n scheduled-stop-331200
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-331200 -n scheduled-stop-331200: exit status 7 (2.4828308s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0719 06:16:26.880486    6600 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-331200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-331200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-331200: (27.5272952s)
--- PASS: TestScheduledStopWindows (333.65s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-359900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-359900 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (428.8098ms)

-- stdout --
	* [NoKubernetes-359900] minikube v1.33.1 on Microsoft Windows 10 Enterprise N 10.0.19045.4651 Build 19045.4651
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=19302
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0719 06:16:56.933498   12024 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube6\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.43s)

Test skip (22/144)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)